Notes
Article history
The research reported in this issue of the journal was funded by the HS&DR programme or one of its preceding programmes as project number 14/156/32. The contractual start date was in November 2015. The final report began editorial review in July 2018 and was accepted for publication in February 2019. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HS&DR editors and production house have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the final report document. However, they do not accept liability for damages or losses arising from material published in this report.
Declared competing interests of authors
None.
Permissions
Copyright statement
© Queen’s Printer and Controller of HMSO 2019. This work was produced by Sheard et al. under the terms of a commissioning contract issued by the Secretary of State for Health and Social Care. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Chapter 1 Introduction
Patient experience
Patient experience (PE) is a highly contested concept. On the one hand, it can be conceived as ‘what patients think of the care we deliver as a health-care organisation’. This is commonly divided into the functional aspects of care (e.g. timely management of symptoms, effective treatment, a clean environment and good transfers between teams/units) and the relational aspects of care (dignity, respect, involvement, honesty and clear communication). On the other hand, PE is about what it is like to be a patient who is ill in hospital: the lived experience. This distinction is important because data that are routinely collected by health-care organisations (referred to as ‘measured’ below) almost exclusively generate information about the functional and relational aspects of care, whereas social media [e.g. Care Opinion1 (Care Opinion CIC, Sheffield, UK; www.careopinion.org.uk)] and patient narratives (e.g. in experience-based co-design) provide information about the lived experience.
Historically, there has been some debate about whether (measured) PE tells us anything new or useful about the quality of care in hospital;2 the argument typically centres on whether or not patients really know enough to comment on the technical aspects of care, and whether or not it is possible to robustly measure anything useful given the noise from patient expectations and the effect of outcomes on patients’ reflections on their experiences. In other words, if the treatment was not effective, patients may judge their experience as poor irrespective of the quality of care. However, there is emerging evidence,3 an increasing policy focus and now near universal agreement that PE feedback is necessary in order to deliver high-quality care. 4–6 In 2013, Doyle et al. 7 concluded from their systematic review of the literature that ‘patient experience is positively associated with clinical effectiveness and patient safety . . . and is one of the central pillars of quality in healthcare’, and that clinicians should resist side-lining PE as too subjective or ‘mood-oriented’. Similarly, a more recent review in the USA8 found that higher ‘star-ratings’ for hospitals based on PE feedback were associated with fewer complications and fewer readmissions.
In the UK, significant resource is now allocated to the collection of PE feedback, and the Friends and Family Test (FFT) has become mandatory for all hospital trusts to collect. Other measures include local surveys or audits, annual patient surveys designed by the Picker Institute, complaints, patient-reported safety incidents and comments through social media outlets such as Twitter (Twitter, Inc., San Francisco, CA, USA; www.twitter.com), Facebook (Facebook, Inc., Menlo Park, CA, USA; www.facebook.com), Care Opinion1 and NHS Choices9 (www.nhs.uk). However, the overt emphasis on, and huge resource allocated to, collecting PE data have not been matched by efforts to utilise the feedback and to evaluate its impact on service improvement. 10 There is ongoing debate about the comparative value of quantitative and qualitative feedback,10 and the capacity for staff to make sense of different feedback types has not been given sufficient attention. 10 A recent systematic review of 11 studies of the use of PE data to make improvements in health care11 concluded that ‘a lack of expertise in QI [quality improvement] and confidence in interpreting patient experience data effectively’ was a significant barrier, and that the use of data to inform changes in behaviour, and to measure the impact of these changes, needed greater attention. In fact, the authors note that it was very difficult to identify what changes had been made or to understand what impact these changes had.
We know that NHS staff are currently exposed to many different data sources, but we do not know to what extent PE data synthesis/triangulation is undertaken, whether or not wards have the capacity to combine multiple data sets, or whether every measurement tool is considered in isolation or in a patchwork fashion. There is some evidence that failure to attend to staff capacity while continuing to collect ever more feedback could do more harm than good. A large multimethod study found that, in the main, health-care professionals (HCPs) do want to provide the best-quality care for their patients, but that challenges such as poor organisational and information systems can prevent them from doing so, lowering staff morale in the process.
Conceptually, we approached this project using the notions developed by Wolf et al. 12 as our overall epistemological framework: PE is deemed by HCPs at all levels to be the ‘sum of all patients’ interactions’, shaped by an organisational culture that influences patient perceptions across a continuum of care. Improving PE requires an approach that is ‘integrated’ (i.e. not simply a collection of disparate efforts), ‘person centred’ (recognising that the recipient is a human being) and a ‘partnership’ (with the patient and family). Our starting position for this study was a belief that it is critical that PE is embraced and respected by health-care providers.
Through this research, we seek to unpack the requirements for an effective PE feedback process. We will contribute to shifting the debate around data types from one that focuses on whether quantitative or qualitative data are more useful, to one that seeks to understand how HCPs can be supported to make the right decisions themselves about what data types are most appropriate for their different needs. There is increasing recognition that the complexity of PE is such that multiple sources of feedback (mixed, qualitative and quantitative) are required to get a ‘full picture’. 13 The challenge in this project is to understand the purpose served by different sources of patient feedback, and when they are required. A thorough investigation of what PE data types exist, which are currently viewed and used, and how they could be used more effectively in service improvement enabled us to articulate the different roles for different sources of data and to provide a categorisation not currently available. Our aim was to develop a set of criteria that allowed us to more usefully conceptualise the forms of PE feedback, and to develop this conceptualisation throughout the programme of work.
To do this, we drew on systems understandings of a safe, high-quality health-care system14 and extended the notion that what is required by staff to make improvements is ‘intelligence’:15 timely access to appropriate insights about how the system is currently working and what could be improved.
We will place significant attention on the processes necessary for staff to interpret and act on data effectively. There is increasing recognition that using data sources to change practice demands creativity and skills from staff, and that these have been poorly defined to date; hence the tendency to present staff with data and expect change to happen as a result. 16 Knowledge mobilisation (KMb) frameworks3 help us to understand the dynamic and contextual factors that affect the way HCPs will make sense of any data they are provided with. Such frameworks recommend the move away from linear notions of data presentation for problem-solving, recognising that the aim of improving health-care organisation and delivery is characterised by uncertainty. HCPs will be required to use multiple data sources, as solutions will be distributed throughout the organisation and will be multifaceted. The burgeoning investment in improvement science in health-care settings reflects this shift in conceptualising knowledge for change. 17 Quality improvement methods support change as an iterative process with continual access to feedback (intelligence), yet these methods have not been combined with PE data systems to date, which, as Coulter et al. 10 state, is an opportunity that needs exploring. Thus, in the current project our aim is to bring together, via a toolkit, our understanding of PE feedback sources with improvement science, to bridge the gap between data and action.
The body of knowledge developed by our research team on issues of patient safety, experience and quality in previous studies informed our approach throughout. The Yorkshire Contributory Factors Framework (YCFF)18 provides an evidence-based model of the factors (e.g. teamwork, communication, access to resources) that affect the quality and safety of a health-care system. The effectiveness of PRASE (Patient Reporting and Action for a Safe Environment), a tool designed to capture patient feedback about these factors, was the subject of a National Institute for Health Research (NIHR) Programme Grants for Applied Research-funded randomised controlled trial (RCT) led by this research team. 19 We have learnt many lessons from these studies about the ways in which ward-based staff use patient feedback on safety, and about the organisational capacity that is required, but not always available, to learn from and support service- or organisational-level change. 15 In addition, the framework for ‘The measurement and monitoring of safety’14 helped us to understand the range of different forms of intelligence required (e.g. access to information on past harm, reliability of behaviour, processes and systems, and sensitivity to what is going on). Although this framework pertains to safety measurement and so is not entirely relevant here, it helped us to think about categorising the different forms of PE information available in health care in terms of their function. This, we propose, provides a way of thinking about the disparate sources of information that could guide others in considering what types of data, collected at what level in the system, are most appropriate.
In this programme of work, we also draw on design principles as a basis for developing the Patient Experience Toolkit (PET). Following a process of creative co-design20 involving NHS staff members, managers, patients and researchers, our central tenet was to develop a toolkit to improve PE, rather than a toolkit to use PE data, recognising that achieving the second aim might not necessarily lead to achievement of the first aim.
We therefore undertook a four-stage research project. In stage 1, we conducted a scoping review along with qualitative inquiry to arrive at an extended conceptualisation of what PE is and the role of different types of measures for improvement. In stage 2, we used action research (AR) to co-design and implement a PET that drew on our conceptual work in stage 1, along with the expertise of an improvement scientist to ensure that it contained methods consistent with the principles outlined in dynamic KMb frameworks. The aim of this toolkit was to address the complex needs of HCPs in interpreting, making sense of and using cyclical action and feedback in their working context. Therefore, we brought participatory design expertise to ensure that HCPs directed the design of the toolkit based on their lived (personal) and professional experience. In stage 3, we independently evaluated the AR to make findings transferable to a wider audience and used these in stage 4 to inform a refinement of the toolkit for the wider NHS.
Our research addresses two of the four gaps outlined in the NIHR Health Services and Delivery Research commissioning brief: ‘How should patient experience data be presented and combined with other information on quality, effectiveness and safety to produce reliable quality indicators?’ and ‘What kind of organizational capacity is needed in different settings to interpret and act on patient experience data?’. We view these as related issues that our overarching research question seeks to address: what processes are necessary to ensure that hospital staff can receive and act on PE data so that they can effectively improve PE in their settings?
Evidence explaining why this research is needed now
- The need to address systemic problems of quality and safety is paramount and the role of the patient voice in achieving this is considered vital. 4–6
- Questions around PE feedback type and organisational capacity for interpretation and action are gaining momentum. 10,21,22
- Recognition of the process skills required to generate and utilise knowledge for service improvement is increasing, which provides important insights for debates around the collection of, and acting on, PE feedback. 16
- There is burgeoning interest in using improvement methodology to support front-line staff to make changes. 17 This approach has been applied to related fields such as patient safety improvement,23 but rarely in the field of PE.
- Given the volume of PE feedback currently collected by the NHS, research is now urgently needed that seeks to understand how staff consider different forms of data and how multiple forms of data could be better presented and/or synthesised, and that then works with staff to engender real and lasting changes to services. Our proposed research will meet these objectives.
Aims and objectives
The overall aim of this project is to understand and enhance how hospital staff learn from and act on PE data. The following objectives will allow us to achieve this aim:
- Understanding what PE measures are currently collected, collated and used to inform service improvement and care delivery.
- Co-designing and implementing a PET using an AR methodology.
- Conducting a process evaluation to identify transferable learning about how wards use the PET and the factors that influence this.
- Refining and disseminating the PE improvement toolkit.
Selection of trusts and nomination of six wards
Three hospital trusts and six ward-based teams were involved for the duration of the whole study. This involvement spanned the qualitative study in Chapter 3, the co-design process in Chapter 4, the AR study in Chapter 5 and the mixed-methods evaluation in Chapter 6. The three trusts were selected to provide diversity in size and patient population. The smallest trust is a small district general hospital that serves an affluent town and a wider rural population. The middle trust is a medium-sized teaching hospital based in a large city with some of the highest levels of deprivation and ethnic diversity in the UK. The largest trust has a very large teaching hospital (one of the largest hospitals in Europe) in a major city that has pockets of affluence and deprivation.
The specialties of the wards involved in the study were heterogeneous. We sampled the six wards based on a divergence of specialty, size and patient throughput. The specialties of the wards were: accident and emergency (A&E), male surgery (representing two wards at different trusts), a maternity department (including ante- and post-natal services), female general medicine and an intermediate care ward for older patients. Wards became involved voluntarily, based on consensus between ward personnel and senior management at the trust. This voluntary approach was necessary to ensure an initial high level of commitment from ward teams; we did not want any ward teams to feel under pressure to take part.
Chapter 2 Scoping review
This chapter is concerned with understanding the different types of patient experience feedback available in UK hospitals. 24
Introduction
The use of PE feedback as a data tool within quality improvement (QI) is receiving much interest, as evidenced by a systematic review11 into how different types of feedback have been used in QI and a more discursive piece on PE feedback as measurement data. 13 Both provide more questions than answers about what feedback to collect, how and when to collect it, and how to use it to inform and measure QI. We also know that much feedback is collected but not used,10 and that, when staff are presented with feedback and encouraged to use it for QI, they face a complex set of social and logistical barriers. 15 In order to inform our later study, in which we develop an intervention to assist hospital staff in the effective use of PE feedback, we conducted the following review and categorisation exercise:
- A scoping review of all types of PE feedback currently available to hospital staff in the UK that builds on previous reviews of surveys to include the other feedback available.
- Development of a list of characteristics that we believe to be important in understanding potential use within QI, consolidating what is already known with our own research experience of improving quality of care.
- Use of these characteristics to sort the types of feedback identified in our scoping review into distinct categories that can begin to inform policy-makers, researchers and those responsible for collecting and using PE feedback of their potential comparative uses.
Although we use NHS hospitals in the UK as a case study, we anticipate that our characteristics list and categories will be relevant to types of PE feedback in hospitals elsewhere.
Background
In the systematic review of uses of PE types,11 quantitative surveys were revealed to be the most frequently collected type of PE data (often mandated) but the least acceptable to health-care teams with respect to use within QI. Conversely, other more qualitative types of feedback, particularly those with high levels of patient participation, were less widely collected, suffering from a dearth of evidence around impact and from prohibitive resource requirements, but were the most acceptable to teams for use in QI. In England, there is currently a specific debate about the usefulness of the mandated FFT survey, which some proponents argue offers timely, continuous and local-level data ripe for use in QI at many levels,25 whereas others26 suggest that it is problematic for all uses (with respect to validity, representation and adequacy of detail). There is interest in utilising complaints as data,27,28 as well as online reviews,29 but as yet supportive evidence is lacking. Furthermore, in the mix are methods such as experience-based co-design (EBCD): frameworks that hospital staff can choose to adopt to collect very in-depth feedback specifically for use in QI but which, by their authors’ own admission, require further evidence to justify directing significant resources at wide-scale use. 30 In addition to understanding comparative uses for different types of feedback, it is also suggested that health-care staff should mix different (and multiple) types of PE data to triangulate and obtain the most comprehensive information for improvement. 13 PE feedback is becoming a complex and potentially resource-intensive agenda for health-care organisations.
We therefore need to understand what data are needed within QI, and what PE feedback can offer in relation to this. It is helpful to return to fundamental concepts of QI and ‘data-as-measurement’ to unpack what it is that different types of information can offer. In 1997, distinctions between ‘The 3 Faces of Performance Measurement’ were made that we propose are useful to revisit now. 31 This referred to data used for accountability (outcome measurements used for benchmarking), data for the improvement process (used in problem identification and monitoring of change) and data for research (generating universal knowledge). 31 The first two uses are specifically relevant to the interests of this study. Indeed, the distinction between data that can be used for benchmarking and data that can be used to drive improvement has been made again more recently. 32 We can apply this distinction to PE feedback to help begin to understand potential roles.
In 2013, an evidence scan33 outlined a wide range of PE feedback types available, from quantitative surveys to qualitative patient stories, and characterised them by their ability to generalise (quantitative types) or describe (qualitative types). Subsequently, there have been two reviews of quantitative PE surveys available worldwide, one of which34 assesses them for utility, arguing that their primary use is for ‘high-stake purposes’, such as benchmarking, hospital rankings and securing funding (an accountability function). The other35 reaches a similar conclusion and also summarises why they are not suitable for informing local (e.g. ward-based) improvement initiatives: they do not provide locally attributable data and they lack nuance and detail. It is also suggested that some surveys, if designed and supported to allow local interpretation and timely processing, could be used to monitor local improvement processes as well. 36,37 With reference to the ‘3 Faces of Performance Measurement’,31 we can also see why other types of feedback may offer what is necessary to manage improvement processes: many of the more qualitative forms identified within the evidence scan33 (e.g. patient stories, complaints and interviews) are much more locally attributable and provide sufficient detail to suggest a use in the first step of the QI process: problem identification. Some sources, such as the FFT in England, do provide continuous information, so they could potentially be used for the monitoring of the improvement process. Finally, in addition to its locally attributable and detailed nature, it is argued that qualitative feedback can provide additional insights into aspects of PE that are not possible to elicit through quantitative surveys;38,39 these are the ‘relational’ aspects that are so important to concepts of PE (e.g. how were you treated?), as opposed to the more transactional components (e.g. was a service provided on time?) that are targeted by surveys.
We need to build on the original distinction between accountability and improvement process31 to understand how various characteristics of data influence their use in QI. As described elsewhere,36,37 quantitative surveys vary considerably (e.g. in their ability to capture local granularity). As the evidence scan shows,33 qualitative feedback also varies, ranging from that provided within a complaint (because a patient seeks a response) to that provided within a patient story (because staff want to improve a service).
Methods
A scoping review of sources of patient experience feedback in the UK
We conducted a scoping review of academic databases, grey literature databases and websites, and supported this with our own knowledge from the field and that of our study steering group. We also hand-searched citations contained within returned documents. We identified surveys from the existing reviews34,35 and then conducted our own search of academic databases to update these reviews and focus on the UK only. We used grey literature and websites to identify other types of PE feedback that, because of their non-validated status, were unlikely to be found in academic journals and more likely to be discussed in ‘guidance’ documents and commentaries. We adopted a scoping review method, rather than a systematic review, because flexibility of search terms within the grey literature was paramount to enable as wide a range of PE feedback types as possible to be returned. Comprehensiveness of coverage of the sources available in the UK, although important, was secondary to our aim of developing a characterisation system and categories that we anticipate could be applicable to other types as they emerge. We were informed by a five-step framework for conducting scoping reviews,40 as shown in Table 1.
Table 1 The five-step scoping review framework and our approach

Step | Our approach
---|---
Identifying the research question | ‘What sources of PE feedback are currently available to hospital staff in the UK?’
Identifying relevant studies | Search of academic databases (MEDLINE, CINAHL Plus, AMED, Scopus, Web of Science, PsycINFO, ProQuest Hospital Collection) using the terms ‘patient experience*’, ‘patient outcome assessment (healthcare)’ and ‘measures*’. Time frame: 2000–2016. Search of grey literature [Google (Google Inc., Mountain View, CA, USA), Google Scholar, Grey Literature Database, Royal College of Nursing database, Care Quality Commission (CQC), Collaborations for Leadership in Applied Health Research and Care (CLAHRC), The Health Foundation, HealthTalk.org, iWantGreatCare, HealthWatch, The King’s Fund, NHS England, NHS Institute for Innovation and Improvement, NHS Surveys, Mumsnet (Mumsnet Ltd, London, UK),41 Patients Like Me, Patient Experience Portal, Patient Experience Network, Care Opinion, Picker Institute, Scottish Government, World Health Organization] using the terms ‘patient experience feedback within the NHS’, ‘patient experience feedback of hospital care’, ‘NHS use of patient experience feedback of hospital healthcare’, ‘improving patient experience’ and ‘patient experience toolkit’. These were subsequently adapted to suit the different ways organisations use terms. Time frame: 2005–2016, narrower than for academic databases owing to the high number of returns. Note that different terms were used for academic databases than for grey literature because of the different content likely to be returned through each route
Study selection | Inclusion criteria: any sources of feedback relating to PE of hospital care; patient or carer perspective; for use in UK acute hospital settings. Exclusion criteria: sources of feedback relating to PE of specific aspects of quality such as safety, clinical outcomes, person-centred care, performance of individual clinicians or health-care staff, or treatment/condition-specific experiences; not patient or carer perspective; not secondary care; those aged < 18 years; for use outside the UK
Charting the data | The search returned 38 different types of PE feedback, for which we immediately created three broad categories informed by our general understanding of the way feedback varied. This enabled the results to be displayed in four separate tables to aid comparison: Appendix 1 (17 surveys), Appendix 2 (12 patient-initiated feedback processes) and Appendix 3 (7 feedback and improvement frameworks). Two types of feedback did not fit well in any of these and were placed in a fourth table, Appendix 4 (other). This was deemed a reasonably objective task and was therefore performed by one researcher (RP), with two additional researchers confirming these categories
Collating, summarising and reporting the results | Our categorisation exercise, detailed in Developing a list of ‘defining characteristics’, fulfils this stage
Developing a list of ‘defining characteristics’
We established a consensus team to develop a list of 12 key descriptive characteristics to help understand the role of different feedback types in QI. This list is provided in Table 2. The team comprised the principal investigator (PI; Professor in Psychology of Healthcare), four health services researchers (one psychologist, two social scientists and one sociologist), two design researchers (concerned with the presentation and usability of patient feedback), one health-care improvement specialist and one patient involvement facilitator. The list developed iteratively through the following stages:
- The PI first used the evidence referred to above, combined with their own knowledge of QI and PE, to produce an initial list of nine characteristics and presented this to the consensus team.
- The consensus team then added a further four characteristics, making 13 characteristics in total.
- One researcher (RP) attempted to use this list to characterise six of the types returned through the review, finding that 12 of the characteristics worked effectively and only one did not, so it was removed. This was ‘whether or not the feedback related only to specific patient groups’, which was not possible to ascertain from descriptions of the types.
Table 2 Characteristics of PE feedback and their character options

Characteristics of PE feedback | Character options
---|---
Nature of data obtained from feedback |
Type |
Level of applicability |
Evidence for validity (applies to surveys only) |
Timing of feedback |
Mode of feedback collection |
Internal or external sources | Formal hospital system for collecting feedback and externally supported websites
Quantitative | Survey (paper, telephone, internet or a combination)
Qualitative | Interviews, observation, focus groups
Availability of feedback |
Requirement for feedback |
Supporting hospital systems |
Timeliness of feedback availability to service |
Regularity of feedback |
Perspective captured |
Who initiates feedback? |
Who provides feedback? |
Defined role in QI |
Extent of the defined role |
This list of 12 characteristics was then presented to the study steering group to ensure that it made sense beyond the consensus team. This process led to clarification of the definitions and potential variability (character options) of each characteristic as listed in Table 2.
Five broad headings (i.e. nature of data obtained, mode of feedback collection used, availability of feedback, perspective captured and defined role in QI) were used to group the 12 characteristics.
Assigning ‘characters’
These characteristics were then applied by one researcher (RP) to all returned types of PE feedback, which generated ‘raw’ descriptive categories (see Appendices 1–4). These were checked by two other members of the team before being finalised.
Findings
Four categories of types of patient experience feedback
We then used our characteristics list to further analyse, understand and subdivide the descriptions contained in these ‘raw’ tables. As well as enabling us to provide a more nuanced presentation of the distinctions between PE types, this process led us to more indicative titles for the categories than those used as appendix titles. The refined categories and their subcategories are shown in Box 1. The distinctions that we make between them are described below, highlighting potential implications for roles within improving PE.
Box 1 The four categories of PE feedback types and their subcategories

1. Hospital-initiated quantitative surveys

1a. Mandatory

Organisation level:
- The NHS Adult Inpatient Survey (England). 22,42,43
- Scottish Inpatient Patient Experience Survey. 44
- Inpatient Patient Experience Survey (Northern Ireland). 45

Any level:
- Your NHS Patient Experience Survey (Wales). 46

Service or specialty:
- NHS A&E Survey (England). 43,47
- NHS Maternity Services Survey (England). 43
- Scottish Maternity Care Survey. 48

1b. Voluntary

Hospital level:
- Hospital Care & Discharge. 49

Any level:
- PPE Questionnaire 15. 50
- OxPIE. 51
- Newcastle Satisfaction with Nursing Scale. 52
- VOICE survey. 53

Service or specialty:
- PEECH. 54
- ICE Questionnaire. 55
- New Models Study. 56
- Urgent Care System. 57
- Patient carer diary. 58

2. Patient-initiated qualitative feedback

2a. Formal hospital system
- Liaison Service concerns. 9,62–64
- Hospital-supported feedback cards.

Hospital-supported websites:
- NHS Choices. 9
- Care Opinion (if adopted). 1
- iWantGreatCare. 65
- Facebook set up by ward/hospital.

2b. No hospital system
- Compliments and thank-you cards. 33

Websites:
- Mumsnet. 41
- Twitter.
- Google reviews of hospitals.
- Facebook (generally).
- Care Opinion (if not adopted). 1
- Other websites.

3. Feedback and improvement frameworks

3a. Focus on ‘collection’
- Emotional Touchpoints. 66
- Discovery interview. 67

3b. Focus on ‘collection’ and ‘action’
- Patient Journey. 68
- Kinda Magic. 69
- Experience-based co-design (EBCD)/accelerated experience-based co-design (aEBCD). 70
- Fifteen Steps Challenge. 71
- Always Events. 72

4. Other

Mandatory (England):
- FFT. 73

Voluntary:
- HowRWe. 74

ICE, Intensive Care Experience; OxPIE, Oxford Patient Involvement & Experience Scale; PEECH, Patient Evaluation of Emotional Care during Hospitalisation; PPE, Picker Patient Experience; VOICE, Views on Inpatient Care.
There were 17 types of feedback that fitted into the first category of ‘Hospital-initiated quantitative surveys’. 22,38–40,42–58 Common to almost all of these types of feedback is that the data are predominantly quantitative, collection is initiated by hospitals, patients and not carers are targeted, and there is a significant delay (caused by processing) in providing information back to the organisation. However, closer inspection reveals a distinction between those that are mandated for high-level organisational use (for whole organisations or whole A&E or maternity departments) at regular but infrequent intervals, and those that are offered as voluntary tools for use as and when an organisation decides. The former most clearly exhibit accountability features: providing organisational-level data (within parameters defined and initiated by the organisation) and validated to make generalisations and comparisons (between organisations or over time) when conducted for large samples. On the other hand, with the exception of one survey, Hospital Care & Discharge,49 the voluntary surveys can be applied at any level at a timing to suit, or are especially designed for use within a local service or specialty, such as the Intensive Care Experience (ICE) Questionnaire,55 without prescribing regularity. Unlike the mandatory surveys, only some are clearly validated. Potentially, these more flexible surveys that elicit local-level information offer more scope for informing or monitoring local improvement of PE. Only one survey, Your NHS Patient Experience Survey (Wales),46 does not conform neatly to this subdivision. This survey is strongly recommended for use, not mandated, and is designed for use at any level. This implies more flexibility, and perhaps that it has been designed to inform or monitor local improvements as well as to provide accountability.
We call the second category ‘Patient-initiated qualitative feedback’ and include 12 feedback types here1,9,41,46,59–65,75 that exhibit common traits: they provide qualitative data (applicable to any level of the organisation), are initiated by patients on an ad hoc basis (whenever they choose to) and the feedback is available to the organisation quickly (referred to as in real time). The concept of validity is not applicable because all data are provided on a case-by-case basis. Within this category, the significant distinction is between those types that are formally supported, either because hospitals are mandated to support them (e.g. complaints,46,60,61,75 concerns9,62–64 and NHS Choices9) or because hospitals choose to adopt a system (e.g. setting up a ward-based Facebook page or buying into iWantGreatCare65 to organise their feedback), and those with no supporting system in place. The latter include informal feedback (e.g. compliments, thank-you cards) that is received but not perceived as data requiring attention or processing. We include a caveat here because some hospitals could have more formal systems for handling these (we know anecdotally that this happens), but this is not widely acknowledged or articulated as a process. This subcategory also includes websites external to the organisation (e.g. Facebook, Twitter, Mumsnet,41 Google reviews) where patients/carers may upload feedback, but there is no guarantee that this will be viewed by hospital staff. Other less well-known sites could also exist on the internet. Care Opinion1 currently spans both subcategories: it is offered as a formal system of data management for a fee, if hospitals choose to adopt this. If not formally adopted, the platform could still be used by patients to upload feedback that may or may not be viewed by the hospital.
In summary, this category offers a different kind of ‘data’ from that offered in category one and, therefore, has a potentially different role within QI. In category one, feedback offers evidence-based scope for use in benchmarking and monitoring of organisational trends. Category two feedback provides more local-level information that would not be valid for use in that way. It exhibits some characteristics (e.g. nuance, specificity) that suggest potential use within local QI processes, especially problem identification. Currently, however, feedback within this category is presented largely on a case-by-case basis and not as collated data ready to use. This makes its proposed role as a data source more tentative than that of the surveys of category one, and we return to this issue in the Discussion.
We name the third category ‘Feedback & improvement frameworks’; this includes seven types of feedback66–72 with some common, defining features: feedback is exclusively qualitative and can be collected at any level of service by a variety of qualitative research methods, with a varying degree of prescription in this regard. Interviews are common, but focus groups, observation and shadowing all feature here. All types elicit rich data that take time to process. All have a defined role within QI, albeit to a varied extent. Feedback collection is initiated by staff but, in striking contrast to surveys (which cover issues deemed important to organisations about their service delivery), qualitative methods are used in ways designed to tap into patients’/carers’ authentic voices. As in category two, the data elicited are qualitative, but they differ significantly owing to their embedding within qualitative methodology, which elicits rich (collated) data sets ready to use.
The nature of the third category can be explored further using two subcategories. In the first subcategory are two types of feedback that focus primarily on eliciting the authentic voice (Emotional Touchpoints66 and Discovery Interviews67). Their associated guidance makes reference to the use of feedback to make changes, but this aspect is not covered in detail. The second subcategory includes frameworks for linking the collection of feedback (still attempting to tap into patients’/carers’ authentic voices) to QI techniques. With reference to QI, these frameworks do not offer data suitable for accountability (nothing generalisable for large samples). They do offer information appropriate for the QI process (problem definition and monitoring), but the specific ways in which they do this vary considerably. Some advocate linking feedback directly into the continuous learning process: Patient Journey uses AR,68 Kinda Magic links to metrics collected separately69 and Fifteen Steps and Always Events use mainstream QI approaches such as plan–do–study–act (PDSA). 71,72 Both EBCD and accelerated experience-based co-design (aEBCD) recommend collecting qualitative feedback on impact to assess perceptions of how the service has changed, and also suggest collecting other measures about the change, for example cost savings to a service. Three (EBCD, aEBCD and Always Events) also stand out for the way they use feedback as data for problem definition: feedback is interpreted together with staff and patients/carers in a process of co-design so that contextual meaning informed by those who work in the service can be added. This is an interesting category that appears to push the boundaries of how we consider feedback as data within QI. We return to this observation in the Conclusion section.
Finally, we identify a fourth, miscellaneous category, ‘Other’, containing two types of feedback: the FFT73 and HowRWe,74 which do not fit into any of the above three categories. Both are surveys that hospitals can initiate, asking standardised questions, but unlike the surveys in category one they are not designed to capture large amounts of data (many questions) infrequently; instead, they are very short and designed to be used more frequently, potentially providing a more continuous flow of PE feedback. The FFT has only one question and HowRWe has four. Both allow qualitative comments to be added and both can be applied to any type of health-care setting; however, this is where their similarities end. Most significantly, the FFT is mandatory in England, whereas HowRWe is a voluntary tool and is therefore much less widespread. The data arising from the HowRWe standardised questions are validated to provide comparable data over time and between areas, whereas the data arising from the FFT standardised questions are not. HowRWe therefore has more obvious potential for the measurement and monitoring of trends within QI over time and between areas than the FFT. The comments provided by both tools could be used within this process. These comments can be likened to the data arising from the feedback types included in category two: qualitative and context specific. However, just like those data types, the comments are not provided as collated data ready to be used, and the steps to enable them to be used as data are not specified.
Discussion
In this study we have reported a three-stage process to help make sense of the different types of PE feedback on offer to hospital staff in the UK. Using concepts of measurement as defined for QI,31 prior commentary on measurement for PE10,11,13 and our previous experience of researching this field, we sought to develop an understanding of the potential roles of these different types of data within QI. Our scoping review identified 38 different types of PE feedback ‘on offer’ to staff within UK hospitals. Using a consensus exercise, we drafted a list of characteristics that we believed to be important indicators of potential roles for each type. Using these characteristics to assess each type, we arrived at four distinct categories that we named ‘Hospital-initiated quantitative surveys’, ‘Patient-initiated qualitative feedback’, ‘Feedback & improvement frameworks’ and ‘Other’. We have described above the nature of each of these categories with reference to roles within QI. In addition, we make the following observations.
Hospitals currently have limited access to data that can potentially help to inform and monitor local patient experience improvement
Of the mandated PE feedback types available, none would appear immediately suitable for informing and monitoring a local improvement process (e.g. at ward level). Mandated feedback currently comprises quantitative survey data (the national inpatient surveys for whole organisations,22,42–46 A&E43,47 and maternity departments43,44), complaints and liaison service data,9,59–64,75 one form of online feedback (NHS Choices)9 and the FFT results. 73 As explained above, mandated surveys are most suitable for accountability purposes but do not provide the locally relevant data, accessible to those who need them in a timely manner,35,76 that would be required for informing and monitoring the QI process. The qualitative, locally applicable information collected via the FFT, mandated for England, offers potential within the QI process;25 but this proposal is also fiercely questioned,26 described as a laudable ambition thwarted by the quantitative rating system that currently forces hospitals into achieving acceptable response rates at the expense of considering and utilising the qualitative comments effectively. There is interest in the increased use of other mandated feedback (from complaints and liaison services) by coding and theming it into data sets. 27 However, there are challenges to these proposals,28 relating to system practicalities (collation of case-by-case complaints), the nature of the story told (complex and difficult to code) and availability (often infrequent and inconsistent in style). In short, seen from a QI perspective, mandatory PE data (national surveys, FFT, complaints/concerns and NHS Choices) currently appear to offer little ready-to-use data with respect to informing and monitoring local PE improvement.
The potential for other types of feedback to help inform and monitor local improvement process is not yet clear
Other types of feedback are available should hospitals wish to use them. Hospitals could use the voluntary surveys of category one, which offer more granular data and can be used more flexibly if the analytical capability exists. 38 Similarly, there are proposals that online comments could be used more: they are context specific, qualitative and provide almost instantaneous information. Some online platforms, such as Care Opinion1 and iWantGreatCare,65 are being developed and offered to hospitals for this purpose, and some hospitals/wards are establishing Facebook pages as a dedicated place to collect feedback. As well as supporting the use of these formal platforms, there is an emerging interest in harnessing ‘the cloud of patient experience’ from social media that exists in informal ways (e.g. Twitter, Facebook, Google). 77 All of these proposals warrant further exploration. We suggest that the process of using PE data within the improvement process needs further conceptualisation in itself before we can judge the comparative value of these different feedback types, and we use some observations on category three feedback to develop this proposal.
Beyond metrics: what we can learn from ‘feedback and improvement frameworks’
Category three feedback differs significantly from category two feedback despite both being qualitative. Each category three framework is concerned with eliciting the authentic patient voice using in-depth qualitative methods. Two (EBCD/aEBCD and Always Events) overtly attempt to develop shared meanings (between staff and patients/carers) from the feedback. This indicates a shift away from the notion of patient/carer feedback as a static metric (objective data) that can be used to directly state what should be improved, and then be assessed again to measure impact. Within EBCD/aEBCD this is described as a co-design and co-creation process involving techniques that aid critical, collective reflection. 25,78
We propose that the ways in which EBCD/aEBCD and perhaps other such frameworks use the patient voice to inform improvement and to monitor progress are variable and tentative, and that the commonalities between these ways are ripe for further exploration. Of relevance to understanding this further is the arrival of the term ‘soft intelligence’ within QI more broadly, in which value is placed on understandings gained through everyday interactions and caution is urged with respect to reducing such insights to metrics. 79 When it comes to making decisions about how the seemingly ever-growing stream of PE feedback should be used, these broader conceptual developments about how the patient/carer voice can inform change appear extremely relevant and could add much to traditional concepts of QI as articulated by the ‘3 Faces of Performance Measurement’31 over 20 years ago.
Owing to the flexible approach taken to search terms, our scoping review may not have revealed all the feedback types potentially available in UK hospitals. In addition, because of the occasionally subjective nature of these search terms, a repeat exercise by others may not yield exactly the same results. The same is true of the characterisation and categorisation exercises, in which some subjective decisions were made. In some cases there was ambiguity, and we used our characteristics list as a sensitising framework rather than as an absolute.
Conclusion
Our scoping review has confirmed that there are many different types of PE feedback available, or potentially available, within UK hospitals. However, our characterisation and categorisation study has revealed that, within these, there are currently no ‘ready-to-use’ data sets for informing and monitoring improvements to PE, apart from mandated data relating to high-level organisational trends. Hospitals are currently being presented with many options for engaging with the other types of feedback, some not previously regarded as data, which either already exist in their systems or that could be collected in addition to existing feedback. Some types being offered are integrated frameworks for collection and improvement. We know that hospital teams are already struggling to handle feedback that they are mandated to collect,80 therefore informed decisions about these options are crucial. To support this, we propose further analysis and conceptual development of the role of PE feedback within QI, and that the categories we present in this study are a contribution to this effort.
Chapter 3 Qualitative study
This chapter discusses the problem with patient experience feedback, offering a macro and micro understanding. 81
Introduction
The PE agenda is reaching a zeitgeist moment in many health-care systems globally. Patients are increasingly giving feedback on their experiences of health care via a myriad of different methods and technologies. Most commonly, these take the form of national surveys, formal complaints and compliments, and social media outlets. Various publications outline a range and diversity of qualitative methods for gaining rich feedback from patients. 17 Several systematic reviews have identified a range of quantitative survey tools that are used across the world to capture PE in an inpatient setting. 34,35 These include large-scale surveys, such as the NHS National Inpatient Survey in the UK and the Hospital Consumer Assessment of Healthcare Providers and Systems in the USA. 35 Currently in the UK, major resource is being devoted to the FFT,82 which all acute hospital trusts have been mandated to collect since 2014.
A significant driving force for the current impetus and focus on gathering PE feedback in the UK arose from national-level recommendations such as the Francis4 and Keogh6 reports. In addition, ‘Better Together’ in the USA and ‘Partnering with Consumers’ in Australia demonstrate that this focus has been mirrored internationally. 19 It is now widely acknowledged that patients want to give feedback about health care19 and that staff should be listening to what their patients say about the experience of being in hospital. 83 Yet whether staff can use this feedback to make changes that improve the experiences patients have is now a central concern. 10,11,15,22,37,84 This pertains to differing areas of the health-care system, from senior management at the level of the hospital board (a formalised group of directors) down to individual clinicians working on the front line. Hospital boards have recently come under pressure to understand the ways in which they use patient feedback to improve care at a strategic level85 and to examine how they govern for QI. 86 There is a concern that the ever-growing collection of feedback is not being used for improvement but, rather, represents a ‘tick box’ mentality: organisations thinking that they are listening to their patients’ views but actually not doing so. 87 Recent work in the UK has looked at how HCPs make sense of why patients and families make complaints about elements of their care88 and found that it was rare for complaints to be used as grounds for making improvements.
Several studies have looked at teams of front-line clinicians to understand how ward staff can engage with patient feedback to make meaningful improvements. 15,22,84,89 Most of the literature in this area finds that, despite enthusiasm to make improvements and despite the vast rhetoric around this, proactive changes are often minimal and largely concentrated on ‘quick fixes’. 11 It could be said that we are currently at a pivotal moment in this debate, in relation to both national and local policy and to what is occurring ‘on the ground’: the push for improvement to arise from patient feedback is ever clearer and widely acknowledged, yet individuals and systems are constrained in their capacity to deliver it.
In this study, we report the findings from a qualitative study undertaken at three hospital trusts in the north of England that explored the PE landscape. We were most interested in which types of PE data were being collected, how staff were or were not using these data and whether or not there was a relationship with improvement on the wards. Here, we base our reporting on the question ‘what is impeding the use of patient experience feedback?’, which is examined through both a macro and a micro lens. We concentrate on this finding, as it was considered by the participants to be of central importance.
Method
We conducted a multimethod qualitative study using focus groups and interviews across three NHS hospital trusts in the north of England. This qualitative study was the first work package in a programme of research in which the overall purpose was to develop a PET to assist ward staff to make better use of PE feedback. The three trusts were selected to provide diversity in size and patient population. Two wards per trust were then approached to take part in the study, leading to six wards working with us. We sampled the six wards based on a divergence of specialty, size and patient throughput. The specialties of the wards were A&E, male surgery (representing two wards at different trusts), a maternity department (including ante- and post-natal services), female general medicine and an intermediate care ward for older patients.
Data collection
The fieldwork took place between February and August 2016. University of Leeds ethics approval was secured in October 2015. All participants gave written, informed consent. Ward staff took part in focus groups and management staff took part in individual in-depth interviews. Ward staff were mostly recruited through opportunistic sampling, and management participants were sampled for maximum variation. Ward staff were predominantly senior and junior nursing staff and support workers, with allied health professionals included in some of the focus groups. Management participants were drawn from a range of roles occupying middle- and senior-level hospital management, such as PE managers or heads of PE, matrons, heads of nursing (and their deputies), research leads, and medical, quality, risk, governance and performance directors. The bulk of interview participants worked directly in, or managed, PE teams.
Seven focus groups and 23 individual interviews were conducted. Focus groups ranged from three to seven participants, and two management participants were interviewed as a dyad. The average length of an interview was 55 minutes; the average length of a focus group was 45 minutes. In total, 50 participants took part in this qualitative study. Two topic guides were devised: one for the data collection from ward staff and another for management participants. Headline topic guide questioning was derived from the literature. Focus group questioning centred on what types of PE feedback the participants received, how they engaged with and responded to it, and where/how it fitted in with their everyday clinical work. Interview questioning explored the different kinds of PE feedback available to the trust and how these were generated, prioritised and managed at the level of the ward, the directorate and the whole organisation. The formats of the topic guides and of the interview questioning were flexible to allow participants to voice what they considered to be important. All focus groups and interviews were conducted face to face in staff offices, digitally recorded and then transcribed by a professional transcriber. Author B collected all interview data; authors A, B and C collected focus group data. All are experienced qualitative health researchers with doctorates in their respective fields.
Analysis
Authors A and B took the same five interview transcripts and each independently developed a provisional descriptive coding framework. These five transcripts were chosen as representative of the whole interview data set in terms of spread across the trusts and general content. The same exercise was repeated for the focus groups, albeit with three transcripts. Authors A and B then held an intensive analysis session (along with author D) to discuss the differences and similarities in their coding frameworks, although there was general parity between them. Author A then returned to the selected transcripts and immersed herself in the data in order to devise an overall meta-level coding framework that would allow data from both the interviews and the focus groups to be coded together. This meta-level coding framework sought out themes on a conceptual level rather than a descriptive level; that is, rather than simply describing what the participants discussed, author A looked for the differing ways in which PE feedback was approached conceptually across the participants involved in both methods. Differences and similarities were identified, with author A noticing that participants discussed the topic at different levels: the management interviewees tended to view PE feedback in a macro manner (both explicitly and implicitly), whereas the ward staff focus group participants viewed it in a micro manner. The meta-level coding framework was checked with author B for representativeness and accuracy. After slight modification, author A then coded all transcripts, and some subthemes were modified as coding progressed. Author A conducted further interpretative work in order to write up the findings. We began by conducting a classic thematic analysis90 but realised that this was not sufficient for our needs, as thematic analysis often relies on portraying a descriptive account of participants’ narratives. Instead, we conducted a high-level conceptual analysis. The analysis was wholly inductive and, as such, we did not structure it on any existing theoretical frameworks.
Findings
Here, we briefly set the scene by describing the main sources of PE feedback in the UK before moving on to focus entirely on ‘what is impeding the effective use of patient experience feedback?’ All participants have been ascribed a number and a generalised descriptor of their role, rather than their precise role, in order to protect their identity. We will discuss two distinct groups of participants, which we will call ‘ward staff’ and ‘managers’.
Setting the scene: what are the sources of patient experience feedback?
All participants were able to name a wide variety of types of PE feedback that they had encountered and interacted with in their professional roles. These took the form of formalised written sources such as the FFT, complaints and compliments, thank-you cards, Patient Advice and Liaison Service (PALS) communication, patient stories, the NHS Inpatient Survey, local surveys and other initiatives such as ‘You Said, We Did’. Senior leaders within the organisations spoke about Care Quality Commission (CQC) inspections and the use of social media as outlets for feedback, although ward staff paid less attention to these sources of data. When first asked to discuss PE feedback, ward staff spoke about the more immediate, direct, ‘in the moment’ verbal feedback that they received in an impromptu manner from patients on their ward during the course of a shift. This often took the form of patients complaining verbally (in an informal manner) about their care or the environment to the clinician caring for them or to a more senior staff member; conversely, it also included spontaneous thanks or praise given in an interpersonal exchange. In this study, we focus on formalised sources of PE feedback and discuss the factors surrounding their effective use, as per our key areas of interest and research brief. However, it should be acknowledged that ward staff often used informal feedback in a timely way to improve the experience of a particular patient.
What is impeding the effective use of patient experience feedback?
We chose to focus on the factors that are impeding the use of feedback rather than giving equal attention to the factors that were assisting it. Although there were certainly instances where individual personnel and small teams had instigated beneficial processes and ways of working, these accounts were localised and were not what most participants considered central to the topic at hand. Furthermore, attempts to improve issues identified in feedback sometimes led to unintended consequences, which further problematised an already complex and fraught task. When participants talked in positive terms about PE feedback, they often spoke of idealised situations or what they would like to see happen in the future rather than what was currently happening in practice. Overwhelmingly, participants across the data set pinpointed significantly more negative than positive factors within their current working practices when trying to use patient feedback, and this is, therefore, where we place our analytical attention.
There is a clear division between a macro and a micro understanding in how participants discussed PE feedback within their health-care organisation. Management participants commented on feedback and the use (or otherwise) made of it at the level of the organisation, whereas both ward staff and management pinpointed problems at a micro level with the function and usefulness of the individual data collection sources.
At the macro level of the health-care organisation
Considering the data set as a whole, possibly the most striking element is the overwhelming nature of the industry of PE feedback. Ward staff at one hospital department at Trust C stated that they were collecting around a thousand FFT cards a month, in addition to all the other patient feedback received. Both management and some ward staff participants across the whole sample felt overwhelmed and fatigued by the volume and the variety of data that the trust collected:
So we have got the Friends and Family Test, which produces, as I am sure that you are aware, reams and reams of information but nobody is really quite sure what to do with that information. Because there’s just loads of it. I mean our goal is about 50% of people that leave fill in a card.
Trust B, interviewee 2, patient experience management
At each of the three hospital sites, a significant, system-wide level of resource, effort and time was being expended, primarily focused on maintaining the collection rates of feedback. This was coupled with layers of hierarchy and bureaucratic processes surrounding data collection, which were felt to be confusing to staff and patients alike. Mirroring the current NHS staffing situation among the clinical workforce, some management participants felt that they did not have enough staff or appropriate expertise (often stated as qualitative expertise) in their immediate teams to work effectively to produce meaningful conclusions from the data they received. This was despite an abundance of resource being given to collecting feedback on the ground, leading to a bizarre situation whereby masses of data were being collected from patients but a lack of skill and personpower prohibited their interpretation and, therefore, their use:
So with all the ways and means of collecting the feedback, it’s how to actually pull out a theme to actually make an improvement. It feels as if we are overwhelmed with everything and the next step for me is, we need to actually take it to the next level and start learning from it.
Trust B, interviewee 3, patient experience management
At the centre of this situation was the idea that data collection in and of itself was considered to be the most important achievement, rather than a focus on how the feedback could be used to drive improvement. In relation to the FFT, there was a narrow focus on each ward’s response rate (the percentage of its patients who had completed a FFT form) and on enlarging this rate to the detriment of other activities. Regarding complaints, there was an overt focus on the timeliness of responses and on trying to reduce the volume of complaints, rather than on understanding what an effective response looked like and how it could be emulated:
Number of complaints is one thing, great, are they getting more or less? Less, great. Are they responding to them within our forty-day timescale? Yeah, great. For me that’s all nice and boxes we can count and tick, but actually what are the main complaints? What are the main themes? What are they doing about them? What’s on their action plan? So that we’re, we want to shift to a more action-based approach rather than counting.
Trust C, interviewee 4, performance manager
Management participants often talked in corporate terms about where the responsibility for PE feedback sat within the hospital hierarchy, which often demonstrated that PE was a fractured domain spread across several different disciplines. However, some senior leaders explicitly articulated the artificial nature of this division and how this splintering of the response to patient feedback was hindering the ability for change to occur as a result of it. For instance, in one trust the responsibility for complaints, PALS and the FFT was split across three different teams that had little crossover and, therefore, minimal capacity to consider this wealth of patient feedback as a whole. In a different trust, a senior manager had noticed that this division was holding learning back and brought representatives from these teams together once a month in a formal event. A few participants noted how the electronic systems for collating the different sorts of feedback were completely distinct, which further compounded the lack of cross-team working. The division between complaints and PALS (both as a concept and in practice) was remarked on by some management participants as being arbitrary and unnecessarily confusing to patients and the public. Some participants spoke about how several different PE initiatives were ongoing simultaneously within the same trust, with little scope to make the linkage between them explicit because their remits sat with different teams.
The participants interviewed for this study nearly all saw immense value in PE feedback, and most believed that it should receive a high organisational priority at a strategic and trust board level. Yet this was not often the situation ‘on the ground’ in their organisations, and the culture around this was said to be hard to change. Patient experience was sometimes felt to be the poor relation of patient safety and finance, with a lesser emphasis and priority placed on it:
They [directorate representatives] have to give an explanation as to why performance is bad in terms of finance, access, targets, the waiting lists and quality is one of the agenda items, but it seems it will always be the item that is skimmed over. Patients’ experience and stuff, it is on there but no one ever really pays attention.
Trust C, interviewee 3, patient experience management
Related to the above, management participants discussed where the responsibility for PE ‘sat’ within their trust. Usually, PE was housed under the nursing remit and patient safety under the medical remit. This division was said to be unhelpful by several participants who felt that PE was therefore automatically seen as an issue for corporate and shop-floor nursing staff to solve:
My only nervousness is it’s done almost entirely through nursing . . . and there’s rafts of things [feedback] that are about doctors . . . I think there is a perception, you know, the doctors do the doctoring thing and nurses do the patient care thing and it’s nurses and it’s about wards when actually when you look at it, actually quite a large volume [of feedback] is nothing to do with nurses whatsoever.
Trust C, interviewee 4, performance manager
Drawing together the points raised so far, it is clear that current patient feedback systems do not generally allow for learning across the organisation. The collection of PE feedback seems to be the focal point, with intensive resource given over to it, while fractured and disparate teams struggle to make sense of the data or to assist ward staff in doing so.
At the micro level of the feedback itself
Both management and ward staff participants spoke about the usefulness of the PE feedback that they received. Usefulness was often aligned to whether or not it was felt that improvements could be made based on the feedback. Overall, it was felt that most wards were awash with generic and bland positive feedback that rarely guided them in identifying specific elements of positive practice. This contrasted with a smaller amount of negative feedback where patients often pinpointed precise instances of poor PE:
Usually the positives are very general, when they’re negative it’s something very specific; ‘the bins are noisy, the buzzers don’t get answered on time, I didn’t get X, Y and Z at teatime . . .’
My lunch was cold.
Yeah, usually they’re quite specific, whereas the good and the positives tend to be more general: ‘the whole ward was clean and tidy, the staff are all lovely’, do you know what I mean? So I feel sometimes we don’t always necessarily get that much information about the positives, it’s always a very general positive.
Trust A, Focus group 2
A different problem with the feedback sources currently received related to the extent to which ward staff were able to interact with and interrogate the raw data that were passed on to them by PE team members. Senior ward staff participants were sent spreadsheets of unfiltered and unanalysed feedback; in some instances, this ran to hundreds of rows of text for a month’s worth of data. The complexity and volume of the data that ward staff had to contend with were often seen as so overwhelming that some ward staff deliberately chose not to engage with the data. The two main issues that prevented ward staff from using, or in some cases even looking at, PE feedback were a lack of time and a lack of training. Taking time away from clinical duties to ‘sift through’ a large volume of unsorted data was not felt to be a high priority. Likewise, it was evident that ward staff did not have the skills required to perform sophisticated analytical tasks on the data they received:
The stark reality is most front-line staff, and even most managers, really struggle to find the time to look at the kind of in-depth reporting we get back. We get reports back that are, you know, extremely bulky documents and people struggle to have the time to really read them, understand them and use them.
Trust B, interviewee 4, patient experience management
In general, the raw data from patients were said to be difficult for ward staff to interact with, and some participants questioned whether or not the current process was fit for purpose. A few management participants spoke about how a lack of decent analysis before the data were passed on to ward staff simply compounded the problem even further. Even more difficult to achieve was the idealised notion that differing data sets should be brought together to provide an overall picture of what patients thought about an individual ward. Despite all of these difficulties, there was an expectation among senior leaders that ward staff should be using the feedback to make improvements to the ward.
Compounding these problems of data interrogation were problems that ward staff perceived to be inherent in the data themselves, even before the data reached them on the front line. Most significantly, timeliness was seen as one of the main concerns: it was difficult to engage ward staff with data that were not real time. A specific example of this is the NHS Inpatient Survey, where patient feedback is viewed months after it has been collected. Frustrations were attached to receiving feedback that was considered historical when ward staff had already started to make improvements to address known problems. The FFT data were said to arrive too late if they reached ward staff a few months after collection, and ward staff participants struggled to remember the circumstances of a complaint if it was made several months after the patient had stayed on the ward.
A specific idea raised by ward staff participants concerned the limitations of current PE feedback sources, particularly those that are nationally mandated, such as the FFT. Throughout the data set, there were numerous accounts of how the FFT was considered ‘more bother than it was worth’, superficial, unhelpful and distracting. It was unfortunate to learn that in two trusts the FFT had replaced several local patient feedback initiatives that ward staff had previously valued as useful to their everyday practice and learning:
I know the feedback we get from it [FFT] is not as good as what the You Said We Did information that we used to get back.
‘Cause that was very, very specific wasn’t it?
Yeah, it was, you could relate to it and you could look at it and you could help to action things.
Trust B, Focus group 1
Considering this micro view of the participants’ narratives, it can be seen that a large amount of feedback is positive but generic in nature. Ward staff struggle to interact with the feedback as it is presented to them in its current format, and questions are raised over the inherent value of the sources themselves, specifically in relation to factors such as timeliness.
Discussion
From the findings given above, we can see how the effective use of PE feedback is hindered at both the micro level (how individual clinicians and teams of staff have difficulty engaging with the data sources) and the macro level (how organisational structures are unwittingly preventing progress). In a macro sense, this plays out as a lack of pan-organisational learning, an intense focus on the collection of data at the expense of understanding how they could be used, and fractured PE teams that want to assist ward staff but find this difficult. In a micro sense, a large amount of generic positive feedback is seen as unhelpful, with ward staff struggling to interpret the various formats of feedback while questioning its value because of factors such as the timeliness and validity of the data. The macro and micro prohibiting factors come together in a perfect storm that substantially impedes improvements being made.
Several authors11,15,37 have recently pinpointed the essential problem at the heart of the momentum to collect ever more patient feedback: almost everyone interested in health-care improvement, and certainly those providing front-line care, now has a vested interest in listening to patients,15 yet myriad challenges are still preventing the wide-scale effective use of the data for QI. This problem is set against the backdrop of a simultaneous ‘movement for improvement’,91 in which grassroots, bottom-up approaches to health-care improvement are being championed. It is interesting to note that, despite this recent cultural turn in the literature, which acknowledges the ‘patient feedback chasm’,92 most commentators have so far paid attention to the problems only at the micro level. Flott et al. 37 discuss problems related to data quality, interpretation and the analytical complexity of feedback, and then put forward ideas about how the data themselves could be improved to allow staff to engage with them better. Likewise, Gleeson et al. 11 found a lack of expertise among staff to interpret feedback and issues surrounding its timeliness, coupled with a lack of time to act on the data received. Sheard et al. 15 have explored why ward staff find it difficult to make changes based on patient feedback; they found that effective change largely relates to an individual’s or small team’s structural legitimacy within the health-care system and that high-level systems often unintentionally hindered meso- and macro-level improvements that staff wished to make. The current study is the first to identify the concrete macro issues at the level of the organisation that are obstructing PE feedback being acted on.
A meta principle that can be drawn from the findings of this study is that the way in which patient experience data are used in health care is not changing as fast as actors on the ground strive for it to change. For instance, there is already a recognition that too much data are collected from patients relative to the small amount of action taken as a result. 10,87 Our participants (particularly the management participants) were very mindful of this but largely seemed powerless to prevent the tsunami of ongoing data collection within their organisation. Equally, it has been known for some time that many members of ward staff find the interpretation of data sets difficult or impossible, as they have minimal or no training in analytics or QI. 87 This issue was raised by both management and ward staff participants in our study, but there was no strategy in place or forthcoming at any of the three organisations we studied to address it. The slow pace of culture change discussed above is likely to be related to what has recently been dubbed the ‘uber-complexity’ of health care,93 with key actors working within a system that favours centralised power structures over localised individualistic solutions.
Recommendations for change
Firstly, there should be an organisational emphasis placed on the principle that all feedback collected must be capable of being meaningfully used by those providing front-line care; otherwise, it becomes unethical to ask patients to provide feedback that will never be taken into account. An immediate concentration on quality over quantity is important, with a strategic focus that shifts the priority away from the collection of data and onto their utilisation. Secondly, ward staff need to understand the formalised sources of feedback that they are receiving from their patients before they can begin to use them. There are two approaches here, possibly complementary, but both difficult to achieve within the current NHS climate. One is that significant work is undertaken upstream by PE teams to relay the data to ward staff in an accessible, straightforward and engaging manner. Another is that a proportion of ward staff are given robust training in how to understand and act on the feedback that they receive from their patients; this should encompass both qualitative and quantitative analytical techniques and QI methodologies, as one without the other is futile and allows staff to see only half the picture. The macro influences the micro here: if PE teams were not overwhelmed by the volume and multiplicity of data sources while simultaneously underprovided with analytical resource, then this could potentially be accomplished.
At the level of the organisational structure, teams that have been tasked with a narrow focus on individual sources of data (e.g. the FFT, PALS, complaints) should work more closely together, with a strategic emphasis placed on learning across the organisation from the variety of feedback sources. This does not necessarily require extra resource, but rather a firm commitment to different ways of working that aim to understand the big picture instead of attending to the treadmill of targets and metrics for each individual data source. If, as the data indicate, a significant number of senior health-care leaders believe in the importance of PE feedback, then it must be treated as a priority and effectively incorporated into management agendas and practice.
Strengths and limitations
To our knowledge, this is the first study to pay significant attention to the system-level macro factors that inhibit the use of PE feedback. Other commentators10,11,37,87 have noted some of the micro-level factors that we have identified here, but not how they interact with structural issues to further compound the problem. A limitation may be the way that we have chosen to concentrate explicitly on the problems surrounding the use of PE feedback, owing to the emphasis that participants themselves placed on this aspect. A more traditional barriers-and-levers-style write-up (paying equal attention to problems and solutions) may have uncovered different, or more worthwhile, suggestions for change.
Conclusion
Our study found that the use of PE feedback is impeded by issues pertaining both to macro-level structural/organisational factors and to micro-level factors surrounding how individuals interact with the data sources. These factors collide to create a situation in which an ever-increasing amount and diversity of feedback is being collected while staff at different levels of the hospital hierarchy struggle to use it to make improvements to patient care. Given the current rhetoric around the importance of paying attention to PE, it is likely that current ways of working are not changing fast enough to match how staff say they want to use patient feedback. We put forward recommendations for change that focus on quality over quantity, on ensuring that ward staff can understand the data that they are receiving and on changes to organisational structure.
Chapter 4 Co-design process
This chapter discusses co-designing and prototyping the toolkit.
Introduction
This chapter outlines the process and thinking behind the co-design and development of a PET. There are two distinct design phases described here. The first was a co-design phase that generated prototype v1. This was tested in-context-in-practice through an immersive AR phase. The second phase involved a series of prototype iterations interspersed with AR cycles to develop and refine the toolkit to be implemented in ward contexts. Core parts of the final version of the toolkit can be viewed online. 94
Prior to this design work, a large body of research had been undertaken to understand what PE measures are currently collected, collated and used to inform service improvement and care delivery (see Chapters 2 and 3). A key aim of the co-design was therefore to add to these findings by facilitating a dialogue with stakeholder representatives specific to this project. This took place through three participatory workshops that scaffolded an iterative and creative co-design process of discovery, definition and development,20 leading to a toolkit prototype. These sessions are outlined in detail later in this chapter.
Information gathered from the workshops to support earlier findings included:
- revealing current experiences of, and attitudes towards, PE data
- discussing and identifying the range of forms of patient feedback data used in different health-care settings
- considering appropriate ways to receive, communicate and share patient feedback data
- discussing possible strategies and actions for acting on PE data
- considering ways to recognise and record actions undertaken in response to these data.
The subsequent iterative process was used to modify the toolkit based on accumulated learning from contextual use (through AR cycles). This process saw four further prototype iterations being developed over the course of 7 months. These sequentially explored ways in which the toolkit might better function in context-in-use; they bridged the gap to the ‘messy’ reality of the context and practice in which ‘evidence’ is used.
In this chapter, we will:
- outline background literature on co-design, prototyping and iterative design cycles
- detail the co-design phase, with specific focus on the design, delivery and outputs of the co-design workshops
- specify the development of prototype v1
- detail feedback from AR phase one using prototype v1
- describe the design iteration process and related prototype variant nomenclature
- reflect on the process and product, drawing out relevant learning for future co-design initiatives in similar health-care research contexts.
Background
Co-design and co-designing
Co-design refers to the collective creativity of designers and people not trained in design working together in the design development process. 95 In a co-design scenario, there is a duality of roles for all participants. The designers are designers in the process, contributing their professional expertise to process and ‘product’, and they are facilitators, guiding participants through the design activities. In this case, the designers are also researchers. The co-design participants are simultaneously ‘client’ (or end users), stakeholders within the context of use and ‘junior’ designers taking on some of the creative process. This specific project included the additional roles of co-design participants who were action researchers and health services researchers.
There has been an increase in the use of design methods and approaches within health-care development and research. UK organisations such as NESTA96 and the Design Council97,98 have produced reports that discuss the need for, and benefits of, using design methods and involving patients and wider stakeholders in the development of public services. In their 2004 RED paper for the Design Council, Cottam and Leadbeater98 discuss using design to help transform public services, citing chronic disease as a case study. In this report they argue for an end to top-down approaches and an embrace of co-design. 98
With the NHS calling for new, empowered patient relationships, and design theorists such as Manzini99 and others100 believing that design is important in shaping new societal futures, the design profession appears well placed to answer this call. Furthermore, it has been suggested that ‘Design-led participatory approaches, help the NHS think differently’. 97 Muratovski101 discusses the changing face of design, the ever more complex problems that designers face and the movement away from ‘product creation’ to ‘process creation’. This can be seen in most of the papers in the Chamberlain et al. 100 review, which use user-centred, participatory and co-design methods and cite the use of designed physical artefacts and health informatics as key outputs. The diverse range of design-led activities and responses found within the review includes exhibition, environment and service design solutions. This encouragingly supports the potential of design-led activities in the health-care environment to play an active role in health agendas and within interdisciplinary research teams.
Prototypes and prototyping
The design process is based on experiential learning cycles in which participants (usually designers) create real ‘things’ that enable people to respond, interact and react. This creates new knowledge and understanding about a specific ‘problem’, ‘context’ and proposed ‘solution’. For a designer, the process of drawing or making something is not always a means of transcribing ideas from their head, but a way of orchestrating a conversation with themselves and others. Externalising emergent thoughts and making them tangible allows designers to extend their thinking, distributing it between conception and perception simultaneously. 102 When others are invited into this ‘conversation’, the materiality of drawings or prototypes makes it easier to share and develop knowledge in a common language, unbounded by barriers between disciplines or hierarchies. The mock-ups and prototypes created play a vital role in facilitating the conversation between co-design participants, as well as between the co-design group and those beyond it. The process of making, either collaboratively or as an independent enquiry, elicits deeper forms of knowledge, such as tacit, behavioural or experiential knowledge. The result can elevate research findings into meaningful, impactful outcomes that are sensitive to the ‘messy’ reality to which they hope to contribute.
These externalised tangible ‘things’ are called prototypes. In the health-care innovation landscape, the word ‘prototype’ is sometimes applied to PDSA cycles of improvement; in that context, prototypes are usually planned quite carefully to be carried out at a specific point in the future. This differs from their use in the design world, where a designer spontaneously sketches or ‘mocks up’ models using materials available within the immediate vicinity, ‘exploring’ an idea and how it might become real.
Design iterations and cycles
When mapping out a design process to develop a specific idea, strategic thinking is applied to a programme of successive prototypes that aim to build the designer’s knowledge about what will and will not work for the target user(s) in context. Over a programme, prototypes aim to explore what is technologically feasible, acceptable and desirable to the user, or economically viable (in production, set-up and running costs). For example, some prototypes may test specific functional features of the proposed solution, whereas others may explore form and aesthetic appeal. Prototypes might also ‘challenge’, ‘provoke’ or ‘catalyse’ in relation to assumptions or the imagination. Successive prototypes, together with the cycles of gathering feedback about them and designing responses to that feedback, are termed design iterations or design cycles.
Overall co-design workshop and structure
As previously mentioned, the PET was developed through three workshops using participative co-design methods as a way of engaging a variety of stakeholders. 20 Representatives from six wards from three NHS trusts and a group of six patient/public representatives volunteered to take part in the three workshops. Members of the research team (who did the initial scoping research) also participated in the co-design.
This is a summary of the roles and responsibilities in the co-design phase:
- Participatory designer – to bring participatory design techniques to the process of co-design and to encourage the AR group to think as creatively and innovatively as possible while staying focused on achieving a practical outcome.
- Improvement scientist – to ensure that improvement methodology is central to the design of the toolkit by participating in the co-design workshops, and then to provide technical support to teams enacting these aspects of the toolkit over the 12-month period.
- Action researcher – to form and facilitate an AR hub to co-design, implement and develop a PET through iterative cycles of action and reflection.
- Evaluation fellow – to subsequently conduct an independent, mixed-methods evaluation revealing how wards use the PET, including managing a PE survey to monitor PE in each of the wards over the period of the PET implementation.
In addition, core workshop participants consisted of:
- Hospital staff – who would take part in the co-design activities and implement the prototype PET on each of their wards.
- Patient representatives – who would take part in the co-design and advise on implementation of the prototype PET on each of the participating wards.
In total, between 20 and 30 people were present at each of the three workshops: approximately 20 hospital staff and patient representative participants and around 10 researchers and design researcher facilitators. The latter took an active part in the workshops when not undertaking research collection or facilitation activities. The intention was that the same participants would attend each of the workshops; however, there was some variation because of work commitments, which led to some participants missing one of the workshops, and in some cases replacement representatives were sent. Workshops lasted around 3 hours and took place in three different locations in Yorkshire (Harrogate, Leeds and Bradford) between September and January 2016.
The focus of each workshop was constructed around the following:
- Workshop 1 – identify and problematise the current experience of patient feedback data with workshop participants (primarily through the LEGO Serious Play method103).
- Workshop 2 – responses and forms (working with information collected in the first workshop, several ideas and concepts were re-presented to the workshop participants to help decide on options for further development).
- Interworkshop development meeting – the research team met to discuss findings from workshops 1 and 2 in order to assist the design team in developing a prototype for testing in workshop 3.
- Workshop 3 – testing. Participants were asked to work through a prototype version of the PET based on the findings of the first two workshops; these findings had been analysed by the research team, including improvement scientists and designers, to produce the first testable toolkit prototype.
As well as the formal interworkshop development meeting, continued discussions involving the design researchers and other members of the research team took place between each workshop to analyse and reflect on the process as it developed. These discussions were used to formulate the structure and development of design facilitation materials to be used by the participants. Continued points of discussion included:
-
How to make the toolkit adaptable and dynamic to suit the needs of different stakeholders?
-
To what extent do we utilise digital technologies?
-
How to respond to different feedback types – numbers and stories?
-
How to differentiate and value negative and positive drivers? (How does the way in which feedback and how this is presented/displayed/consumed impact on morale of the service providers?)
A description of workshop 1 (Bradford)
One of the biggest challenges in a co-design process is how to engage people effectively. If this fundamental aspect is not addressed appropriately, then issues of dominance and hierarchy, and a failure to access genuine experiential or tacit professional knowledge, are more likely to distort the process. One of the inherent features of co-design is the ‘production’ of something; this production can also refer to the creation or the sharing of knowledge. However, knowledge is an intangible concept that can mean different things to the various stakeholders participating in the same process. Using co-design practices to problematise what we mean by knowledge and the sharing of knowledge can therefore be a useful activity.
In workshop 1, the ‘LEGO Serious Play’ methodology103 was used to stimulate dialogue about the participants’ experience and expectations of patient feedback through the medium of 3D model-making. The methodology enables participants to access subconscious and tacit knowledge, surfacing awareness of ways of doing things that have become automatic and were never made explicit. It also helps to make ideas tangible, which can reduce assumptions. 104 The ‘LEGO Serious Play’ process gives all participants the space to make something physical and then talk about what they have made and why. This inherently gives all participants a platform to contribute and allows them to make their contribution through the thing that they have created. Using the artefact as the vehicle for discussion, and as a form of media, can protect people’s contributions from criticism by others and can help to level hierarchies and empower people. LEGO and the LEGO Serious Play methodology were used to facilitate creative/participatory ways of involving people in the toolkit research and improvement work (Figure 1).
Workshop 1 outline
The aim of the first workshop was to identify and explore the current experience of patient feedback data with workshop participants. At the end of this workshop we hoped to have further insight into:
- What forms do ward staff currently receive patient feedback data in, and in what forms would they like to receive it? (Where and when would they like to receive it? How often?)
- How do we get from data to action?
- How might action be recognised and recorded/celebrated?
The workshop was conducted according to the following sequence of activities:
- Introduction to the project and research to date – a research team summary of the project to remind the participants of the aims behind the workshops.
- What is a co-design experience? – scene-setting for the workshops and the co-design concept.
- LEGO Serious Play – a brief introduction to LEGO Serious Play and its use in this context.
Participants were divided into table groups of six or seven people, forming a mix of trust sites and roles, and were asked to undertake the following introductory exercises:
- Skills building – getting all the participants familiar with using the bricks and building to create metaphors. This also introduced the idea of full participation from everyone.
- Individual model building:
  - Team 1 – to focus on the what/when/where and how of PE data.
  - Team 2 – to focus on how to get from data to action.
  - Team 3 – to focus on recording and recognising the impact of PE data.
- Landscaping – models were placed in groups to look for themes and links, forming the basis of areas for ideas to move forward.
- Recording and sharing – participants used word cards (placed on the models) and cameras to record the models and what they meant. Feedback was given to the table by individual participants, then summarised table by table to the rest of the group.
After the completion of the first workshop, the outputs were examined by the project researchers and the research design team, alongside documentation (taken on the day, in note form) of how the groups got on with the activities. The findings were analysed to identify areas and patterns of concern and interest that could be used to frame the second workshop. Responses to the questions posed to each of the three teams were summarised in diagrams (Figures 2a, 2b and 2c), which were shared with the participants at the start of workshop 2.
A description of workshop 2 (Leeds)
As mentioned, an analysis of the information collected from workshop 1 was used to structure the content and exploration of workshop 2. In this second workshop, we examined how three identified common areas of interest could be used to begin to think about both simple and innovative ideas for effectively using the patient feedback resources to hand. The three common areas that formed the basis of the enquiry in workshop 2 were:
- the different types of data available (data forms and uses)
- the people who use and create PE data (people and relationships)
- the places in which these data are used (environments and timing).
Workshop 2 outline
This began with a brief recap and sharing of the findings from workshop 1, after which the workshop 2 exercises were introduced. These exercises gave the participants the opportunity to explore each of the areas identified. The participants were divided into three groups, which took turns to investigate each of the themes, collecting information through a series of participatory research design exercises; all participants were thus given the opportunity to rotate and contribute to each theme. To make the conversation context-relevant, we suggested that on this occasion people remain in their own trust groups (patient representatives were given the opportunity to stay with a particular theme or to move round with a trust group). Different activities were set up on three tables and the groups rotated after 40 minutes. The activities are described below.
Table A theme: thinking about different data forms and their uses
On this table, two activities were designed to generate a conversation and to collect feedback on people’s experience of existing forms of patient feedback and their typical uses on the ward.
In the first exercise, participants were asked to use a template to list the different types of PE data/information that they had seen or used and to write alongside these any positives of having access to PE data in each form (Figure 3). Participants initially worked in pairs before their responses were collated onto one list per table; 15 minutes were allotted for the exercise.
In the second exercise, participants were asked to think about different PE data formats and to consider what new ways this information could be used on the ward, through both formal and informal means. The participants were given blank pre-printed card templates and stickers of different data forms that represented different types of patient feedback (Figure 4). A new card was completed for each type of feedback form identified. After selecting a feedback form, participants were asked how the feedback could be used, what effect this might have and what resources might be needed to implement the use of the data. Along with instructions, the following notes were printed on each card to help guide the participants:
Think about the form – How it arrives on the ward and is shared with people? – How people might use it? – When – where, etc.?
How might it be adapted? Might it work better on a screen – on a mobile app – in a folder – a conversation?
Is it a quick win or will it need a change in culture?
There were 25 minutes allotted to the exercise.
Table B theme: thinking about people and relationships
The focus of this theme was to think about how we could encourage people to better record, communicate and make use of PE data. Again, two exercises were designed to facilitate this conversation and collect information.
In the first task, participants were asked to list all the different roles that they encounter in a typical day and then to add a list of the positive aspects of how people currently work in and inhabit the ward (Figure 5). Again, participants initially worked in pairs and then collated examples onto one list per table. There were 15 minutes allotted to the exercise.
In the second exercise, we asked participants to think about people and relationships, and to consider in what new ways PE data/information might be shared, acted on, evidenced, recognised and celebrated on the ward. Using cards and stickers representing a cross-section of service provider and service user communities, participants were asked to choose a person/role and fill in a blank card with an idea of how that person might share PE data, what this sharing might achieve and what resources might be needed to make it happen. Participants were encouraged to think of different types of interactions and roles: as an individual, one person to another, one person to a group, as a group, etc. (Figure 6). Along with instructions, the following notes were printed on each card to help guide the participants:
Think about the relationships between people? Times when people meet? Different times of the day? Different roles and responsibilities? Different types of interaction between people? Is it a quick win or will it need a change in culture?
There were 25 minutes allotted to the exercise.
Table C theme: thinking about environments and timing
In these exercises, participants were asked to think about how we could make the physical environment more responsive to PE data, helping to promote staff ownership of PE data and to celebrate its effective use. Participants were encouraged to reflect on the use of existing spaces and activities: handovers, ward meetings, displays, noticeboards, suggestion boxes, etc.
In the first task, participants were asked to list all the different spaces they move through or use in a typical working day and then to add a list of the qualities of the spaces identified (Figure 7). Participants initially worked in pairs before collating examples onto one list per table. There were 15 minutes allotted to this exercise.
In the second task, participants were given cue-card pictures of typical health-care spaces and asked to discuss and write ideas on how and where PE data might be shared, acted on, evidenced, recognised and celebrated in these spaces, as well as suggesting any missing spaces (Figure 8). Participants were encouraged to think about where we currently see evidence of patient feedback in our environments and whether this feedback came from a locally generated or a formally collected source. The participants were also asked to consider whether their ideas represented ‘quick wins’ or required some form of technological intervention and a change in culture. There were 25 minutes allotted to this exercise.
Three primary themes came out of the data from workshop 2: ‘Forms of data and their uses’, ‘People and relationships’ and ‘Environments’.
Interworkshop development
Between workshops 2 and 3, the project researchers (including designers and improvement science specialists) met to discuss how they could use the findings and activities of the first two workshops to inform the design of a prototype toolkit that could be tested with the participants in the third and final workshop, and then refined for implementation on the wards. This group discussion led to the development of a set of seven core principles that should underpin the prototype toolkit and seven key steps that should comprise its content. These are outlined in Table 3.
Principles | Steps
---|---
PE is everyone’s business and each person has permission to enhance it | Step 1: form a PE team who can address the following questions
PE feedback comes in many types, including formal and informal | Step 2: what is PE currently like on your ward?
Celebration of positive PE is as important as action to improve negative PE | Step 3: are staff views about positive and negative patient experience shared by patients/relatives?
It is important to communicate any activity to enhance PE with as many staff and patients as possible | Step 4: can you identify what and who currently contribute/s to these negative and positive patient experiences?
Any activity to enhance PE needs the patient voice firmly at the centre | Step 5: what are you going to do?
Enhancing PE requires hearing the voices of all patients, especially those who are most vulnerable and those less likely to be heard | Step 6: how will you know what has changed as a result of your efforts?
Timeliness: some PE feedback/data are more relevant to informing improvements than others, and this will depend on when/where it has been gathered | Step 7: how will you share your progress?
Designers then turned these principles and steps into an initial design for a paper-based workbook structured as a set of activities, information and resources to support a staged group/team experience. Central to this toolkit prototype was a PE improvement flow chart that indicated how activities linked together.
Following the style of the collaborative exercises designed for workshop 2 (described in Workshop 2 outline), the workbook included instructions and links to reference materials. This first iteration consisted of exercises designed to take participants through the following:
- planning a team
- a set of guiding principles to facilitate working with PE feedback
- thinking about a positive example of patient feedback – one that works well
- thinking about a negative example of patient feedback – one that needs improvement
- action planning and review of PE feedback (recognising and celebrating action).
First version of the toolkit tested and critiqued by workshop participants
In workshop 3, participant groups were asked to work through and evaluate this first draft prototype version of the PET, providing feedback to help develop a version that could be tested in wards. The toolkit prototype took the form of an A4 ring binder with removable pages to facilitate group working. This allowed a small number of copies to be printed and compiled for testing in the workshop and enabled suggestions and alterations to be added easily after the workshop.
A description of workshop 3 (Harrogate)
After scene-setting and introductory talks, the participants were introduced to the principal activity of the day, which was to work through the first toolkit prototype. A number of copies of the toolkit folders were handed out, which allowed small teams from each trust to work through the material. There were 40 minutes allotted to this exercise.
The participants were told that the prototype was informed by ideas and suggestions from the first two workshops and their subsequent evaluations. They were asked to complete the toolkit by working through the different activities and considering the questions in the context of their own wards or environments. As they worked through the toolkit, they were asked to make notes on the process: how easy, useful or difficult they found the activities within it (using a different-coloured pen or post-it notes to differentiate between comments and the completion of the activity). Groups also gave suggestions for resources that they felt might help with completion. Members of the research team were on hand throughout the workshop to observe and assist in the collection of feedback.
After working through the toolkit, the participants were asked to consider the experience and give feedback on the following questions. Feedback was either written on the worked-through copy of the toolkit or collected by project researchers.
Feedback questions and prompts included:
- How could they see this working in their setting?
- How would they capture and summarise the activity, as a team and to share?
- Were they able to follow the steps and activities?
- Did the steps and questions make sense?
- Did they like the format – did they have any suggestions for another way of presenting the material?
- Was it similar to other PE feedback activities they had done in the past?
- Were there any missing sections (or changes needed to the order), or materials such as communication materials required?
Reflections on workshop 3
The completed activities and comments from workshop 3 were collected and reflected on by the research steering group, and a number of the suggestions made were taken on board in the design of the second toolkit prototype, which was to be tested in the wards (and was known as version one).
These suggestions included:
- a more meaningful cover design, with a list of different forms of feedback to set the scene
- a contents page
- colour-coded sections (reflected in the contents page)
- a process flow chart that takes the user through how the toolkit works
- a more detailed section on how to build a team to engage with the toolkit
- moving the Guiding Principles section to after the team-building exercise
- separating the positive and negative feedback issues into two exercises
- introducing a separate ‘plan and review’ section to look more specifically at service improvement strategies
- expanding the resources section (which sits at the back of the toolkit) into separate pages that can be added to.
Further reflections on designing and working with the toolkit that needed to be considered during the implementation phase included:
- making it clearer that there is time/activity required between the plan and review sections (a summary activity and then reflection on intervention activities)
- the suggestion that a calendar could be added so that people could fill in how they will engage with and implement actions/ideas that have come out of the toolkit activities
- the time that staff would have available to use the toolkit
- the support available to interpret and share feedback data
- the need for more links to online resources.
Prototype phase: plan for using the toolkit in the ward context
The suggestions and changes from workshop 3 were incorporated into version one of the prototype. This version was used as the first PET for the AR cycles following the co-design workshops.
It was believed that a main part of the toolkit experience would involve the health-care teams working through and completing the toolkit activities as a team exercise (Figure 9), and it was initially intended that the toolkit would be explored through a series of group sessions. It was envisaged that the toolkit could be worked through over four or five sessions of 30 to 60 minutes, each with around eight participants (minimum four) who would be broadly representative of the ward community, scheduled ideally every 2 weeks. Concerns about whether or not staff would have time to use the toolkit back on the wards, and about how much support might be needed to interpret or share feedback, had previously been raised.
The sessions were supported and facilitated through a set of resources made up of the toolkit (demonstrated in Figure 10), subject information, participant activities and awareness materials. Importantly, project researchers would facilitate the sessions/meetings in the initial implementation. It should be noted that these parameters were set up as provisional guidelines only, and the realities of implementing the toolkit in the various wards influenced this suggested model dramatically. The implementation process is discussed in detail in Chapter 5.
Feedback: prototype v1, action research phase 1
As described, the toolkit is intended to provide a framework and process guidance that enable ward teams to work with PE data and to drive changes that improve patients’ experiences. Importantly, version one aimed to ‘walk’ the user through a process of using the PE data, understanding key messages from the data, identifying opportunities for improvement and then structuring a ‘project’ to make and test the proposed improvement.
The feedback from prototype version one, used in the first AR phase by the action researchers, highlighted several high-level tensions or conflicting interests, along with a number of secondary issues and cross-cutting themes. There was also one specific highlight: the single-side A4 flow chart summarising the process that the toolkit was walking users through. This had proved immensely useful to both the action researchers and the ward staff at numerous points throughout AR phase 1, and everyone felt that it should be maintained and enhanced in subsequent prototypes.
The high-level tensions were:
- The way that the toolkit was used: as a document versus a facilitated and peopled process.
- The depth of information in the toolkit: the tension between overall guidance and specific detail.
- The ward teams’ focus on ‘current patients’, whereas the majority of PE data were from ‘previous patients’.
- Ownership – bottom-up versus top-down. Who owns the process and who owns the toolkit: a ward leader, a ward team or a central support department (improvement team or PE team)?
- ‘Idealisation’ of toolkit use in context: the ward staff were taken off the ward for the initial toolkit development, and when the toolkit was tried out on the ward it was realised that there were elements that would never be used there.
Secondary or cross-cutting themes were:
- ‘Skills gap’ – there is a ‘gap’ in the skill base that ward teams need to utilise PE data, regardless of this toolkit.
- Ward capacity – ward capacity is limited and staff face competing priorities just to deliver the basic care and nursing essentials in their role profile.
- Mindset – the toolkit needs to impart a positive mindset towards improvement as a ‘core activity’. This includes being open-minded to feedback from patients, being reflective, considering small and achievable changes and seeing improvement as a continuous process.
- Appearance – the finished product needs to ‘sell’ itself, looking ‘easy to use’ and not ‘burdensome’; for example, the physical size (i.e. being too big) was seen as a barrier to engagement.
- Format – needs to be ‘punchier’ and bring the topic and process ‘alive’; the folder would be a useful repository for shared learning and resources ‘collected’ or ‘developed’ along the way.
- Content – not overwhelming, and user specific (multiple different users?).
Iterative prototyping process
Based on the feedback summarised in Feedback: prototype v1, action research phase 1, a design process was outlined that sought to address or query, perhaps even challenge, the high-level tensions and secondary themes. The resulting ‘steps’ or iterations in the process are shown in Figure 11, which illustrates the various prototypes.
The first step was to strip out the content completely, keeping only the ‘backbone’ of the flow chart. Various improvements on the flow chart were explored to address the issues relating to ‘Appearance’ and ‘Format’. Content was then gradually reintroduced throughout subsequent prototypes, in a tiered form to explore what content was most important and who it would be useful to. The next stage was to purposively explore issues relating to who the end user(s) might be and who might ‘own’ or control the process. The next prototype explored issues relating to the ‘Skills gap’, ‘Ward capacity’ and ‘Mindset’. The final two prototypes explored issues relating to content.
A detailed description of the design iterations and related prototypes (v2–v5) is included in Report Supplementary Material 1. This includes a description of the feedback from each preceding AR phase, the changes between the previous and ‘current’ prototypes, what the ‘current’ prototype sought to establish and how it was presented to ‘users’ in the subsequent AR phase. The overall process is also illustrated in Report Supplementary Material 1.
The final prototype (vFinal), following all design iterations and feedback, is shown in Figure 12. This version embodies a toolkit that will be used by a facilitator to assist ward teams in going through a process of collecting and using PE data to make ward changes that improve their patients’ experiences. It includes a refined flow chart of a six-phase process that a facilitator will lead them through. Each phase is structured into a three-tier hierarchy of detail, such that the facilitator can share more or less detail with the ward staff, depending on the level of understanding sought.
It is suggested that toolkit variations that could be requested, ordered or downloaded included:
- full toolkit and resources bound together
- toolkit and resource packs bound separately (with or without external folder)
- flow chart on its own (in various sizes)
- ‘high-level’ detail for ward teams (either in a pack or unbound)
  – flow chart and ‘Russian doll’ level 1 for each phase
  – full toolkit on its own (bound) without resources.
Reflections from the designers
Through the co-design process, a tool was created to enable understanding of patient feedback issues and to form a response in a given care context. Based on ideas and common themes that emerged through the participant activities in the first two workshops, a prototype (v1) was devised. This was to test how we might develop a flexible framework that would allow people to formulate a tailored strategy for thinking about and addressing patient feedback in their context. Moreover, through the workshop activities and reflection on these activities, we could progress from a blank sheet of paper (not knowing what shape or form the PET would take) to having a workable model for testing in the wards, based on contributions from workshop participants who were representative of the community. Because of these contributions, we included a strong interactive/activity-based element in the toolkit in an attempt to be less prescriptive and more agile. This was the flexibility that would allow the toolkit to accommodate the wide variety of contexts of use. It was hoped that participants from the sessions would become ‘patient experience ambassadors’ and help to raise awareness of the use and importance of patient feedback information across the ward communities.
Initial observations from the co-design/co-research process influenced how the toolkit evolved in terms of its form and implementation. The first use of the toolkit in workshop 3 did not reflect the real-world use to which it would be subject in a ward environment. Issues around planning teams to include different levels of nursing staff, time pressures and roster considerations, and the need to develop trust among the team in order to work speedily in pressurised environments all became apparent. Patient representatives stressed the need for flexibility and versatility, both in how they applied their skills and in their availability. More concretely, it became apparent that an approach that utilised patient representatives to collect feedback might be advantageous. The use of open-ended questions helped to provide a fresh perspective on the care experience, but the co-design process identified a need for resources to manage data, to help create feedback and to stimulate ideas. From the initial ward implementation, two examples of issues identified from patient feedback were daily communication with patients and relatives about the care experience, and problems with patients’ boredom and loneliness on a ward. A need was also identified to devise more effective ways of presenting feedback on what was being done by ward teams, to help to activate thinking differently, such as displays of feedback-into-action for patients and relatives or the creation of certificates to recognise excellent care.
Product
As a product, this toolkit has to work across a variety of contexts in acute and community health-care settings, and there are significant variations in its ‘staff’ users. The final design has been robustly developed with a variety of potential users and contexts, but its impact, ease of use and transferability to contexts not involved in the development remain to be fully evaluated. As it stands, it is not without limitations. It requires a skilled and trained facilitator with specific knowledge about eliciting, collecting, collating and interrogating PE data, who can present such data back to ward staff and assist them in identifying relevant themes for improvement. This person also facilitates a ‘design’ process of generating ideas for improvement, testing them and scaling them up.
However, initial signs and early feedback from the ward staff, patient representatives and QI staff involved in the project are extremely positive, and the early evaluation indicates some improvements. Whether these can be attributed specifically to the toolkit (as opposed, for example, to the increased staff awareness arising from involvement in the project) is harder to say.
Process
One design method currently being applied in health care is the EBCD methodology. 78 It was first used in 2005 by health-care professionals seeking to design services for a more patient-centred NHS. 105 Co-design methodologies were used to develop a replicable, free online toolkit. However, this case study is an example of disciplines outside design using design methods without including designers in the activity. This has led to incomplete use of EBCD, in which process steps that are perceived to be too challenging or unfamiliar are omitted. Because of this, EBCD is often criticised by designers106 for its limited tangible service improvement and its lack of ideation tools, and is often described as ‘design-like’ rather than designerly. Challenges and questions have been raised about the level of innovation actually achieved.
The approach used in this work to design and co-design the toolkit was both novel and challenging in a positive way. The process started with a co-design phase led by designers to develop the first prototype, followed by an immersive AR phase of testing and developing that prototype, before closing with four iterative cycles of prototype evolution and ‘user’ feedback; to our knowledge, this process has never been utilised before. In the final implementation of a health-care intervention, context is often influential in determining whether or not interventions are successfully and sustainably adopted and used. This means that interventions need to be context-sensitive and adaptable. The approach described here was purposively deployed to enable the research and design team to respond to the wide range of context variations described above, and it enabled them to design and develop a toolkit that would work across a number of these contexts.
The co-design approach used to generate the initial prototype, working with staff from six different ward settings across three different hospitals, utilised designerly methods of ‘making’ to enable all participants to contribute equally. We used LEGO Serious Play as the underpinning protocol and LEGO as the medium for making because, in our previous experience, it has proven to be an accessible approach and medium for dealing with complex topics such as this. This inclusion and attention to equity ensured that we elicited details about variations across contexts and, to an extent, these were catered for in prototype v1. The workshops around which this co-design phase was structured were appropriately conducted in a neutral venue, taking the ward teams, patient representatives, service improvement staff, researchers and designers away from their usual contexts of work.
The immersive AR phase tested v1 in different ward contexts and, through the experiential knowledge co-created with the ward teams, insights about the v1 toolkit were generated, challenging some of the assumptions that had been made by the co-design participants in the workshops.
The final iterative prototyping phase was challenging in that it was often difficult to get representatives of the ward staff and patient and public involvement (PPI) representatives together with the researchers and designers to explore the practical and perceptual added value of proposed design changes. These dialogues were often facilitated by the action researchers, using physical prototypes to illustrate various design modifications and alternatives. This led us to consider the notion of boundary objects with regard to the prototypes and further, with regard to the action researchers.
Susan Leigh Star107 is credited with proposing and defining boundary objects, describing them as objects ‘which both inhabit several intersecting social worlds and satisfy informational requirements of each’. She went on to suggest that boundary objects were vague, had strong cohesive properties and were flexible and recognisable across cultures. Henderson108 paraphrases this to describe boundary objects as agents that socially organise distributed cognition.
It is useful to clarify the scope of what an ‘object’ is. In the context of boundary objects, an object may or may not be a physical artefact or thing. It could be a computer program, a space, a theory, a drawing or even a person. The object is something people (or, in computer science, other objects and programs) act towards and with. 109 The notion of boundary objects has been studied further with specific reference to ‘products’ and to issues of knowledge ‘translation’ or ‘transformation’. 110,111
In the context described in this chapter, the discussion about what constitutes a boundary object is ongoing. For example, in this project the boundary object is classed as the PET prototypes. However, it could be considered that the boundary object expanded beyond this to include the action researchers, or perhaps that the action researchers became a part of each prototype. The specific definition of what the boundary objects were in this project is not necessarily important. What is important is that, in the field of health-care innovation, in which it is often difficult to host face-to-face co-design workshops and in which it is so important to consider context, boundary objects become a useful concept through which to consider prototypes and, perhaps, the people or spaces that help to transfer the prototypes between stakeholder groups.
This process raised several interesting questions for the design researchers:
- For the nurses and midwives involved in the co-design phase – did taking them off their wards and out of their context of professional practice for those initial co-design workshops make them think about their practice in an idealised way, building design features into the toolkit based on assumptions of how it would be used in idealised practice?
- Did the immersive AR phase ‘uncover’ these idealised assumptions?
- The challenge of enabling ward staff to participate in face-to-face co-design activities is increasing as pressures on their time grow. Did the immersive AR phase and the subsequent physical prototype iterations work well enough as boundary objects to enable ward staff to engage sufficiently in the later stages of the design process?
- The transition of the toolkit end user from ward staff to facilitators was based on an emergent realisation (from the first AR phase) that the ward teams had a knowledge and skills gap that was vital to following a process that utilised PE data. A question arises for us about an unintentional bias that may or may not have been introduced to this transition by the action researchers, who gradually found themselves acting as facilitators of the toolkit in the first AR phase to ensure that projects got started.
- During the prototyping phase, the action researchers initially began by facilitating the sharing of the prototypes with the target users in their context of use. However, their role gradually shifted to one of primary user. This was because of a recognition that, no matter how the toolkit was designed, there was a gap in ward team ability, capacity or capability to use the toolkit: a gap that the action researchers could fill. When this was recognised, and a toolkit ‘facilitator’ was accepted as the new target user, the designers began designing with, and for, the action researchers as representatives of the facilitators who would use the toolkit with the ward staff in their context. This toolkit facilitator role was being prototyped at the same time as the toolkit itself. How much these were two distinct prototypes or actually one emergent prototype, and whether this made one boundary object or two interwoven boundary objects, remains a point of intense discussion within the team.
Conclusions and recommendations
Co-design, co-creation and co-production are potentially valuable processes for addressing some of the translational issues of health services research interventions;112 yet there are significant challenges to conducting such processes with ‘front-line’ staff and patient representatives. Using creative design approaches to elicit the experiences, ideas and preferences of these diverse stakeholders has proven to be a powerful way of empowering them, and the use of visual and physical media as a form of sharing and communicating helps to remove barriers to mutual understanding. Yet issues such as time poverty are a growing challenge to being able to release staff from their duties in order to engage in such co-design processes. This means that co-design processes have to evolve and adapt, exploring the potential of boundary spaces, boundary objects and ways of communicating and sharing with staff that enable them to engage meaningfully in co-design without ‘costing’ so much of their time. Additionally, such adaptations may enable staff to contribute from their context of practice. This situated engagement may elicit more ‘realistic’ reflections, as opposed to the potentially more idealised reflections of practice that may occur when staff come out of their workplace into neutral settings and then reflect on what they do at work.
Throughout this co-design and prototyping process, the value of the tangible artefact (the prototypes, products and outcomes of the co-design activities and the design process) with respect to KMb cannot be overstated. The value of these artefacts goes beyond the notion of boundary objects that serve to enable knowledge to cross boundaries. It potentially goes to the heart of the translational issues, in terms of both refining knowledge for practical use-in-context and activating knowledge within individuals: the designers, the action researchers and the co-design participants. Although the process was not evaluated through a KMb lens, the designers and researchers are confident that making these artefacts (collectively and, for the designers, independently) allowed existing knowledge to be recognised and valued, new knowledge to be created, and knowledge to be communicated, shared and refined based on practical implications related to context and use. We believe that this is an important lens through which to view and consider co-design within health-care contexts, so as to appreciate its additional value alongside the health sciences.
Chapter 5 Action research study
This chapter discusses how patient experience feedback can be used to make hospital improvements: building the theory of an intervention through action research with patients, improvement specialists, academics and health-care staff.
Introduction
As a result of the co-design workshops, a prototype intervention for using PE feedback in improvement, in hospital settings, had been developed. This had been given the working title of ‘Patient Experience Toolkit’ (PET prototype 1). In this chapter, we provide an account of an in situ (hospital-based) AR method that was used to test and refine the theoretical rigour of this intervention, with hospital staff and patient representatives as our co-researchers. Our approach was informed by Bate and Robert,113 with CM and RP acting as ‘interlocutors’ and ‘activists’. As interlocutors, we carried key messages from the theoretical underpinning of PET prototype 1 to our co-researchers, who had committed to test and refine this intervention in their hospital settings. We went beyond traditional theoretical themes and, as in Plsek et al.,114 devised heuristic statements to build a practical ‘model’ of the types of actions required (i.e. ‘if you want to achieve outcome Y in situation S, something like X might help’). 114 We therefore distilled eight heuristic statements that captured the theory and assumptions of the PET (prototype 1). These then guided the implementation of the PET, in which the statements were posited transparently, tested and refined in situ. As ‘activists’, we participated with the hospital staff and patient representatives as they tried out applications of the PET within their contexts, working to implement, reflect and adapt as required. This chapter first outlines and describes our original heuristic statements, then explains how the AR process led to a revision of these and, therefore, to a more rigorous theoretical basis for the intervention, concluding with a discussion of implications for policy and practice. The AR took place concurrently with further co-design processes (see Chapter 4) to revise and strengthen the PET as a product for potential wider use within hospitals.
Our original ‘heuristic statements’
Table 4 summarises each of our original heuristic statements, indicating how they were articulated in PET prototype 1. Further detail about each (including its location in existing theory) is then provided.
Title | Original heuristic statement/s | Articulation in PET (prototype 1) |
---|---|---|
1. Multidisciplinary approach | PE is affected by all staff types and everyone therefore has a role to play in it. A multidisciplinary core team is best placed to lead implementation of the PET, and others will need to be engaged periodically | Planning your team exercise: identifying core team and operational processes |
2. PPI | Patient involvement is essential to improving PE and representatives could bring an advocacy element | Guiding principles exercise: reflecting on PE is important to the team |
3. Collecting current patient feedback | It is sometimes useful to collect ‘current’ feedback from patients to complement other sources | Thinking about patient feedback exercise: new or additional feedback may be necessary to supplement what is already collected |
4. Triangulation of different feedback | There will be a lot of existing feedback in the system. Some is formal (e.g. surveys) and some is less formal (e.g. daily interactions) and teams need to use all sources together to inform and measure improvement | Thinking about patient feedback exercise and step: collating all existing feedback |
5. PE/QI team involvement | PE teams will be able to access trust data about a ward and may be able to escalate issues raised that are beyond ward remit | Planning your team exercise and step: developing links with a range of contacts within the trust |
6. Facilitating group reflection and planning | A facilitative reflective space will help staff consider their feedback, prioritise actions and measure/monitor impact | Reflective approach to team work implied within all exercises and steps |
7. Applying improvement methods and developing skills | Systematic cycles of improvement methods can support staff in prioritising actions, in measuring/monitoring impact and in developing longer-term skills in improvement methods | Review and plan exercises and step: testing and implementing PE improvement ideas |
8. Celebration | Celebrating good PE enhances staff and patient morale. This should be done at the end of any initiative to recognise a job well done | Celebration and communication and step: developing plans for sharing outcomes of PE work with managers and patients |
1. Multidisciplinary approach
The notion that PE should not be the sole interest of any single health-care discipline is implied within definitions of PE (e.g. Wolf et al. 12), and the need for multidisciplinary approaches to its enhancement has also been noted. 115,116 It was felt that efforts should be made to establish a core team, as multidisciplinary as possible, to lead the implementation of the PET, but that not everyone would need to be involved at all times and some could be brought into the process as required. Among the co-design participants, this sense of an ideal was tempered by an awareness of potential barriers, significantly the perceived schism that exists between doctors and nurses. 117,118 There was an eagerness to use the AR to encounter, better understand and, hopefully, overcome these barriers.
2. Patient and public involvement
Although involvement is widely valued and is indeed a policy imperative,4–6 it also presents many questions. There are numerous potential models of involving patients in improvement, such as that of Bate and Robert,78 and the extent of involvement has been articulated via a spectrum from high to low. 119 Co-design participants, including the representatives of patients and the public who attended, broadly agreed with the sentiment, alongside uncertainty about which precise model would be most suitable. Mirroring a common assertion,120 it was felt that different models may suit different clinical areas.
3. Collecting current patient feedback
Because of concerns raised about the quality of feedback that is routinely collected (see Chapter 3) and its lack of a clear role within QI (see Chapter 2), it was clear that traditional sources of data might not provide everything that is required. Indeed, notions of insight121 and soft intelligence79 are emerging to broaden perceptions about what constitutes feedback for improvement: not just surveys, but perhaps more informal conversations with patients. 122 These proposals were tentatively made within the workshops, amid uncertainty about how this would be possible and what it would look like.
4. Triangulation of different feedback
Our scoping review had revealed the many types of PE feedback available within hospital settings, principally the FFT, complaints and concerns (abbreviated in this chapter to PALS to reflect their commonly used name). Our qualitative work revealed that hospital staff are extremely keen to access and use whatever is available more effectively than they currently do. To conceptualise how different sources could be used together, we were influenced by the notion of triangulation to aggregate and validate,13 and discussed this in co-design. It was clear that this would require a significant data management function that was generally lacking,80 and that there would be other challenges: different types of data, with different purposes, timescales and capture mechanisms, mean that triangulation and aggregation are difficult, and the process of standardising data into comparable formats can lead to loss of the meaning that is found in more fine-grained, locally specific feedback. 10,66 Co-researchers from hospital teams expressed how they also knew things in less formal ways (e.g. through daily conversations), and that these sources could in fact be where the most richness and nuance is contained. Amid this recognition of complexity and uncertainty, the ambition remained to use the AR to articulate which types of feedback were most useful and to illustrate how they can be used together.
5. Patient experience/QI team involvement
Our qualitative work revealed how the task of data management currently fell to under-resourced PE teams, a theme that is recognised more widely. 80 An assertion made by some (e.g. Flott et al. 37) is that, if PE teams were better equipped for data management and interpretation, more appropriate data could be provided to ward teams to develop improvements. With respect to the involvement of QI teams, we had found that they were rarely involved, often focusing at the strategic organisational level. Our co-researchers wanted to explore further the potential data management functions of PE teams and the potential input of those who could support QI (see Applying improvement methods and developing skills).
6. Facilitating group reflection and planning
The engagement of ward-based teams with their feedback was seen as a key area for exploration. Previous studies have shown that participation and reflection are crucial engagement approaches and are essential in utilising feedback within practice. 22 As action researchers, we were sympathetic to participative and reflective approaches to involving teams in all parts of the PET process.
7. Applying improvement methods and developing skills
To help teams visualise techniques for using PE feedback to make improvements, we brought QI specialists into the co-design workshops and referred to the Model for Improvement,123 built around the PDSA cycle. Hospital teams, to varying extents, appeared willing to try these. We were hopeful that they would help, but we were also aware of the caution urged by some (e.g. Cohn124 and Pflueger125) with respect to the loss of meaning that can occur when all care quality information (particularly the relational aspects of PE) is turned into static metrics for QI. Our goal was, therefore, to develop our understanding of how to apply QI techniques to PE in meaningful ways.
8. Celebration
Review and celebration are integral to collaborative approaches to improvement, specifically the discussion of achievements and learning at the end. 30,78 Our co-researchers reiterated the need to hear positive and constructive feedback to counter poor morale and an incessant feeling of failure, so the notion that we would have things to celebrate by making changes to PE was embraced.
Methods
Figure 13 illustrates the study design for the AR and how it relates to the co-design process documented in Chapter 4. The figure shows how the AR built on the initial co-design workshops that saw the development of prototype 1, and then how the AR informed subsequent co-design steps to develop further prototypes. Here, we explain the key elements of this study design.
Participation
We were committed to the principle of participation and democracy that, as modelled by Reason and Bradbury,126 is central to AR. This meant seeking to involve, as co-researchers, as many as possible of those groups of people whom this research sought to serve and who had experiential knowledge of the area of study. At the funding stage, senior staff from three hospital trusts had committed to allowing two clinical teams from each to take part, and the process of nominating these clinical teams (referred to hereon as ‘wards’) had already taken place. We then ensured that all teams were represented at the co-design workshops so that they could take part in ‘owning’ the PET that their teams were to test and refine. A mix of different levels of nursing staff attended the workshops (matrons, ward managers, midwives, senior sisters, staff nurses and health-care assistants) and, at the final workshop, they were formally invited to take part as co-researchers (‘practitioner co-researchers’ and ‘patient co-researchers’) in the AR. As well as ward staff, we invited those in relevant organisational roles (e.g. from PE teams) to the co-design and subsequent AR. We recognised that others from the wards, or from elsewhere in the organisation, could join as the process developed, and we wished to retain this flexibility to a changing context throughout.
We also invited members of the public with an interest in each of the hospitals to join in the co-design and then the AR. Although not currently in hospital, these people were viewed as potential patients who could advocate on behalf of current patients. In the co-design workshops, we linked one ‘patient’ with each ward, with a view to maintaining this relationship throughout the AR.
Finally, we invited co-researchers who could bring some form of generalised expertise and we called these ‘advisory co-researchers’. These were members of the wider research team and two improvement specialists. We maintained the title ‘action researchers’ for CM and RP as they performed the co-ordinating role. Table 5 provides a summary of all co-researchers (n = 64) who worked alongside CM and RP over the course of the AR. Our expectations of the levels of collaboration from each of our co-researchers remained flexible and pragmatic, balancing our desire for participation with our respect for their time constraints. 127
Practitioner co-researchers | | | Patient co-researchers (n)
---|---|---|---
Trust | Hospital staff | Clinical area/numbers of staff |
A | PE lead, QI facilitator | Emergency department (A i)/6 | 1
| | Medical ward (female) (A ii)/12 | 1
B | PE lead; lead for patient involvement in research | Surgical ward (male) (B i)/2 | 1
| | Community rehabilitation ward (B ii)/14 | 1
C | PE lead, PE manager | Surgical ward (male) (C i)/7 | 1
| | Two maternity wards working together (postnatal and antenatal) (C ii)/6 | 1
Advisory co-researchers | | |
Title | Areas of expertise | |
Research programme PI | Quality and safety in health care – psychology | |
Research programme manager | Quality and safety in health care – sociology | |
Improvement specialists × 2 | QI in health care | |
Action research cycles
The heuristic statements were tested and refined through cycles of AR that took place on the six individual wards. We framed these using the cyclical model of AR for organisational change128 that outlines four iterative stages: (1) construction of a shared understanding of an issue that people have come together to address, (2) planning actions appropriate to context, (3) implementing based on these plans and (4) evaluating whether or not intentions have gone to plan, with a view to revising the shared construction. Such cycles are informed by a broader theoretical understanding, which in the model is termed ‘context’. This means an understanding of why the issue is important, and what changes could potentially be made.
In our case, this understanding of ‘context’ is captured in our original heuristic statements, which we took into each ward. We then worked with individual teams to apply these to their clinical area and to plan, implement and evaluate successes. The ward-based AR cycles developed at different speeds and with different foci, so our conceptual understanding of the heuristic statements was influenced not only within but also iteratively between each area, with learning from one situation being brought to inform another. The cycles were split into two phases: between February and September 2017, we worked mainly with heuristic statements 1–5, and between September 2017 and March 2018, we worked mainly with statements 6–8. We used a concept that we called an ‘action research hub’ to maintain a sense of collective research across the six wards. Hub meetings took place at the beginning, middle and end, with the mid-point signifying a pause to collate and review learning so far, with a view to informing all ward cycles for the remaining months.
It is important to note our approach to ‘evaluation’ within these ward-based cycles, as AR is often critiqued for its failure to do this aspect well. 129 We focused on ‘evaluating’ the different heuristic statements that we derived from PET prototype 1 to assess if they made sense and to improve their articulation within a refined version. This took place using reflective methods outlined in Story-dialogue technique. An evaluation of how the PET as a whole enabled teams to work with PE feedback and how/whether this led to improvements was the subject of a separate evaluation study (see Chapter 6).
Story-dialogue technique
We used the story-dialogue technique130 to reflect on practice throughout the cycles. Story-dialogue technique is a form of evaluative questioning designed to generate new knowledge about what happens in practice and from these insights, decide on responsive actions. Box 2 provides an illustration of the nature of these questions, which we adapted for purpose.
Describe what happened from your own point of view.
Why do you think it happened? Try to explain why it turned out as it did.
So what? Synthesise this experience into some new understanding of the context.
Now what? Decide what can be done to address what has been learnt.
Text adapted from Labonte et al. 130
Broadly, we used the technique in two main ways. First, it guided our own action researcher reflections: CM and RP kept reflective journals after each meeting or other significant encounter. 131 These journals served a dual purpose, informing the evaluation (see Chapter 6) as well as our AR. We therefore agreed with the evaluator a set of prompts for all entries that addressed the overlapping needs of the evaluation and those of the story-dialogue technique. In total, CM and RP made 119 journal entries.
Second, we periodically prepared an account of what we (CM and RP) collectively thought about one or more heuristic statements and used a variety of methods to share and explore our reflections with others. We made pragmatic decisions about what would work for different co-researchers at different times. These included written invitations, which received three responses, three focus group discussions at the mid-point hub meeting with a total of 19 participants, five group discussions back in the wards with a total of 12 participants, and two individual interviews. Within each of these settings, we provided each co-researcher with a written synopsis of our understanding of one or more statements of relevance to them and asked them to comment from their perspectives. We recorded and transcribed all those involving verbal responses.
Consultation exercises linked to the co-design
Running in parallel to the second half of the AR were co-design activities (in addition to the initial workshops), as indicated in Figure 13. These comprised consultation exercises on two significant reiterations of the toolkit product, organised at each of the three participating trusts and attended by as many of the co-researchers as were able to come. They also comprised development meetings at which the research team met the design team to interpret and utilise the consultation feedback. Although these co-design activities focused primarily on the format of the product (how it looked and how it could be used), the discussions were open-ended enough to explore some of the theories behind our evolving heuristic statements. It is therefore important to acknowledge their contribution, albeit informal, to our developing ideas.
Analysis
In order to interpret how our heuristic statements changed, we used thematic analysis132 of reflective diaries and the transcripts of all story dialogues obtained through focus groups, interviews and group discussions. We also included the written responses. We did this at various times and depths throughout, thus allowing for regular input from our co-researchers. At all times, heuristic statements were used as ‘sensitising concepts’;133 they guided us as to areas of importance but were only starting points. This was particularly appropriate as our statements were situated within very embryonic conceptual areas. We used thematic analysis at the following times and depths:
Rapid thematic analysis at the mid-point
At the mid-point, a rapid thematic analysis was conducted to inform PET prototype 2 in which we scanned the reflections for key findings, shared and checked these at the mid-point hub through the focus groups, then summarised them for the design team.
Rapid thematic analysis towards the end point
Prior to the end-point hub, a further rapid thematic analysis was conducted in which we scanned the reflections for key findings and provided these to the design team who produced an (almost) final version of the toolkit that they could share and discuss with co-researchers at the final hub.
Full thematic analysis at the end point
When all activities with co-researchers were complete, a full thematic analysis was conducted of all documented data (the reflective journals, along with all transcribed accounts of others’ reflections). The action researchers (CM and RP) first did this individually and then came together to discuss and create a synthesised account of what their heuristic statements had changed and developed into. This is documented in Our action research story and, as it is still considered provisional, will be the basis of future discussion and development with co-researchers beyond the formal research programme.
Our action research story
What happened in each ward?
Although it is not our aim to provide an in-depth story of the journey of each ward using the PET, we do provide a snapshot of key activity in Table 6 to serve as a reference point from which to understand how and why our heuristic statements changed. The table summarises the main messages contained within patient feedback received for each ward and an overview of PE work that they subsequently undertook. More detail of what was achieved in each ward, along with implications for PE, is provided in Chapter 6.
Ward | Key messages in feedback | Overview of PE work |
---|---|---|
A (i) Emergency department | Staff caring and efficient; waiting time is a significant issue from reception to assessment (waiting for results, discharge or admittance). Interpersonal communications and accurate information about assessment and treatment appreciated. Staff and space under pressure with high numbers of patients at times | A nurse-led communication intervention to let all patients know the next two steps in their care with an accurate time estimate; leaflet explaining ED process and services; volunteer recruitment drive |
A (ii) Medical ward (female) | Staff caring and kind, but very busy so it is hard for patients to start a conversation or ask questions about their care as patients do not want to be a burden to staff. Loneliness and boredom expressed by some patients and relatives. Transfers during night experienced as disorienting. Some concerns about some night staff | Introduction of a new ‘patient experience round’: nurse and HCA-led communication initiative to make sure all patients and relatives can ask quality questions which are followed up every day; review of night ‘bank’ staff; seeking activity volunteers |
B (i) Surgical ward (male) | Staff caring and kind; boredom among patients; noise at night especially around nurses’ station; need for better communication with doctors for patients who want updates about their care and those who want to follow their established systems of self-management; medical patients (‘outliers’) have additional support needs which require time | Undertook a noise-at-night survey to identify causes, raised awareness of noise as a health issue among staff, replaced noisy door and bin closers, planning to further develop initiative on medical communications |
B (ii) Community rehabilitation ward | Staff attentive and kind; boredom and loneliness experienced by patients; worries about their future and some people have little or no family support; patients need repeated explanations about their care in language they can understand; uncomfortable chairs | Regular social lunch on ward; volunteer recruitment, and development of activity programme, chair-based exercise class; replaced uncomfortable chairs; gave garden a face-lift; applying for funding for further activities |
C (i) Surgical ward (male) | Care seen as friendly and brilliant, patients feel in ‘safe hands’; nurses attentive and translate communications from doctors to patients; some communications issues identified, especially for patients who do not feel able to ask. Noise issues identified | Tested an afternoon ward round by sister/nurse in charge to address communication needs of patients or relatives; poster to share patient feedback with wider staff team |
C (ii) Two maternity wards working together (postnatal and antenatal) | Staff supportive, approachable and responsive to patients’ needs and questions about the care of their babies. Range of issues about communication identified about the facilities and care provided by the ward team and how to get further help or support | Created a patient and relative welcome leaflet for both wards for all patients, which can be used by staff, volunteers, other patients and interpreters, about ward facilities and where to get support |
How and why our heuristic statements changed
In this section, we document how our original heuristic statements were revised as a result of the AR. We worked mainly with statements 1–5 in the first phase and statements 6–8 in the second phase. Table 7 summarises these revisions and their articulation in the final version of the PET. Two of the original statements (PPI and collecting current patient feedback) are combined, so that we have seven final statements. The AR revealed the details of what they imply with respect to the steps and activities that are outlined in the final PET. Notably, we concluded that most activities require substantial facilitation and support for hospital teams at all levels. We highlight these requirements as we explain our revised statements.
Title | Revised heuristic statement/s | Articulation in final PET |
---|---|---|
1. Multidisciplinary approach | Nursing staff at the ward level can lead PE improvement work but may need help (and encouragement) accessing input from other disciplines, especially if PE does not already fit with existing multidisciplinary remits | The PET recommends nominating one or two ward-based leaders. The facilitator takes on a substantial role in supporting them to make connections as required |
2. PPI and 3. Collecting current patient feedback | Hospital teams enjoy the prospect of involving patient representatives, whose most obvious role is to help address the thirst for credible feedback from patients: talking to patients and articulating PE to staff | Recommendation to always collect current feedback; guidance on supporting patient representatives; tools |
4. Triangulation of different feedback | The notion of triangulation of different types of feedback is misleading. Instead, staff want credible, often qualitative, ward-based feedback provided to them already analysed, which enables them to understand PE in holistic terms | Tools for data management included |
5. PE/QI team involvement | The ability of PE teams to support aspects of this intervention, such as data management at the ward level and volunteer recruitment, should not be assumed | PE team involvement advocated, but the facilitator is likely to have to provide support in this regard |
6. Facilitating group reflection and planning | Facilitation can support reflection and sense-making, and can help teams identify testable actions from PE feedback, which is often emotive and complex | Presentation of feedback to the clinical team advocated, involving patient representatives and central support teams to help teams prioritise short- and longer-term actions |
7. Applying improvement methods and developing skills | The IHI’s Model for Improvement can be used flexibly with PE feedback to guide clear problem definition and the development of categories of actions. Within this, PDSA cycles can be usefully applied to actions requiring changes to team practices. At ward level, qualitative assessment of impact on PE is more appropriate than quantitative measurement | A categorisation of the types of actions required is included as an aid to the facilitator. Resources included to guide all stages of PDSA cycling where appropriate, including assessment of impact |
8. Celebration | Celebration should not be seen as an end-of-project activity; there are positives to acknowledge and celebrate throughout | Opportunities to recognise appreciative feedback and achievements are suggested throughout the PET |
A multidisciplinary approach: an ideal exposed
Originally, we all agreed that PE initiatives needed input from all types of health-care staff but, interestingly, all those attending the workshops, and therefore those who became the leaders and core members of the AR teams for the wards, were nurses. We learnt that situating the PET at ward level meant that nurses automatically took on the leadership role for it, even though we had not specified this, because they ran and managed the wards. Staff from other disciplines tended to have a more transitory relationship with the wards. In every case, one or two leaders for the initiative self-nominated (the ward leader plus one other in some cases) and committed to inviting others into the process as required.
In fact, we learnt that it was beyond our capability, in all cases except one, to invite others in from non-nursing disciplines to work with the ward leaders on PE. It was not that staff did not work with other disciplines regularly (they had daily ward rounds and other such meetings with doctors and allied health professionals), it was that the PE remit did not appear to be appropriate material for these forums. These existing forums focus on immediate patient care, so we sought out other places where multiple disciplines may be able to hear feedback, reflect more broadly and tackle issues together, but here we had no success. In B (i), C (i) and C (ii), concerns about communication with doctors were evident in their patient feedback, but, although efforts were made in B (i) to escalate these to the multidisciplinary care quality meeting with doctors, the forum did not meet regularly enough to enable this. In C (i) and C (ii), the nurses could not easily envisage any existing multidisciplinary team forum that would prioritise these concerns (focusing instead on issues deemed higher priorities such as patient safety or staffing cover). So, to bypass this problem, the nurses chose other issues to work on that they felt could be solved by them, rather than require a multidisciplinary team.
Only in B (ii) (elderly rehabilitation), in which there was an already thriving multidisciplinary approach to whole-patient care, were we able to truly engage other disciplines. In this case, their PE feedback advocated the need for more social activities, and this clearly supported their care objectives for rehabilitation, so they were able to progress this issue across disciplines (e.g. occupational therapy, physiotherapy and nursing).
Although success in engaging across disciplines was limited, we did have some more success in engaging with different levels within the nursing hierarchy, although the ways in which more junior staff engaged varied according to the ward leaders’ management style. Where the ward leader already supported regular team meetings [B (ii)] or training away-days [A (ii)], involvement opportunities were greatest. However, even in these cases, it appeared unusual for junior staff to think outside their defined roles to develop new areas of PE work (an issue that appeared to be at odds with the spirit of creativity and participation that we were trying to inspire), leaving many issues in the hands of the already overstretched ward leaders. In all other cases, few ward team meetings took place, so staff involvement beyond our meetings with ward leaders was extremely limited. As action researchers, we had to take a flexible approach, continually seeking to identify and develop connections across the ward teams and between departments or disciplines, treading a fine line between posing potential alternative structures, roles and activities and not being perceived as disruptive of a ward manager’s established systems.
In the main, we remained narrowly focused on our relationships with the ward leaders and realised how difficult it is to carve spaces for the collective working that had been envisaged in PET prototype 1. Tellingly, the exercises that had been included to support a multidisciplinary and team approach were left uncompleted in all our wards and, therefore, do not feature in the final PET, which advocates a more dispersed method of engagement led by a facilitator seeking out and developing connections whenever and wherever possible.
Patient and public involvement and collecting current feedback: combining two ideas successfully
We initially saw the two aspects of ‘PPI’ and ‘collecting current feedback’ as separate ambitions within the PET. In fact, there was a natural combination of these two ideas. At first, co-researchers had agreed that having patient representatives to work with staff teams was essential, but that their specific roles would develop in accordance with need. When the AR began, it became evident that there was a real thirst from staff for feedback from their current patients in a format that they did not already have. They wanted credible (informative and up-to-date) information that would help them to identify actual changes that could be made (see Triangulation of different feedback: a false promise for more on this). Patient representatives in four of the wards felt able (with support) to address this thirst and talk to patients and relatives on behalf of the staff to gather feedback.
Experimentation took place on A (ii), B (i), B (ii) and C (ii) to develop a model of patient representative interviews. In this, a prompt sheet of four open questions was used to very flexibly guide a conversation with a significant proportion of patients and relatives (approximately half of the patient population over roughly 1 week) about what was important to them with respect to their experience on the ward. Patient representatives were then involved in presenting key concerns back to the staff teams. On wards A (ii), B (ii) and C (ii), the representatives also revisited the wards to gather follow-up feedback about the PE initiatives undertaken. Staff wanted this feedback, but patient representatives also benefited from ‘seeing the fruits of their labours’ and learning whether or not changes had been appreciated by patients. Where this model was not adopted [C (i) and A (i)], the primary interest of the ward managers (at least at the start of the project) was in handling existing feedback more effectively, and this combined with the patient representatives’ interest in contributing in other, more advisory ways.
Where this model of patient representative involvement in data collection did take place, staff were extremely impressed with what it provided. They knew it revealed only a snapshot (not statistically representative) of PE, but they valued it as a way of getting detailed, relevant feedback in a useable form. Staff also felt that patients were more likely to be open with a patient representative than they would be with a uniformed member of hospital staff. We observed that the feedback did not necessarily reveal things that staff did not already know, but that hearing it directly from their patients gave them the impetus to make changes. In B (ii), there had been many staff changes and, therefore, a period of instability prior to the project, and several staff remarked on how the feedback had helped them refocus on patients’ needs. In A (ii), the ward manager explained that, in busy, pressurised wards, it is possible to let standards of compassion and communication slip; periodically hearing direct patient feedback serves as a reminder of the need to keep these standards high.
The success of the model resulted from the combination of a staff desire for this type of feedback with a patient representative who was willing and skilled, with the appropriate personal attributes to undertake this sensitive role. With both aspects in place, it would appear that the intended ‘representative’ role plays out and that representatives can draw attention to the needs of marginalised groups. In B (i), the representative was vocal about the needs of those patients who were seen as medical ‘outliers’ in a surgical ward. In C (ii), the representative helped staff to understand the preferences of visitors to patients from the south-Asian community. In A (ii), the representative championed the needs of relatives or carers, who could be as elderly, vulnerable and lonely as the patients themselves. As action researchers, we worked with trust volunteer teams to support these representatives appropriately and helped to nurture the supportive working relationship between patient representatives and staff throughout.
Triangulation of different feedback: a false promise
Led by the heuristic statement around triangulation, we set out to achieve a ‘combined’ analysis of different types of feedback to present to teams. There was a sense of anticipation at the outset that this step in the toolkit would allow teams to access all the feedback that exists about them and to view it in an accessible format. In fact, we learnt that this was highly problematic. We found that the NHS in-patient survey data did not contain any information about individual clinical areas, with the exception of the Maternity In-patient survey, which did contain broad themes, albeit quite out of date. Within the FFT data, we found little use for the quantitative ratings because these were, in the main, consistently high, thus indicating no trends or themes. We focused, therefore, on the qualitative comments and found that, where quantities were high [over 1000 per month in A (i)], these could be helpfully categorised into topic areas (e.g. communication, medication). However, these lacked detail about contexts and nuance, and they required staff to fill in the substantial gaps with their own knowledge; therefore, the patients’ own perspectives remained weak. Although we could at least derive topics within this high-turnover area, the development of topics from existing data was impossible almost everywhere else, either because patient turnover was slow [B (ii)] or because there was a long gap between collection and the comments becoming available to senior ward staff. Feedback about a ward from other sources (complaints, PALS, social media) was either sporadic or specific to particular situations and times, so that combining it with the FFT into categories and themes was, at best, difficult and, at worst, misleading. Placing one-off events into categories gave the impression of generalised themes when this was not necessarily the case, causing unnecessary upset when presented to one ward team.
In four wards, patient representatives collected additional feedback from current patients and these interview responses were added to the mix of information available. Although all these types of feedback did need to be considered together, the concept of triangulation does not explain what we did to achieve this. Rather than cross-referencing different sources to arrive at an objective instruction, validated by its occurrence in multiple sources, we introduced an interpretive process for assessing the content of all the types of feedback that we had collated. This involved discussion with staff from PE teams, or senior ward staff who had a knowledge of typical issues and the frequency at which these arose for the ward. We then organised and prepared these into three or four topics for discussion that could be presented to the clinical teams. For each clinical area, this involved CM and RP, the patient representative who had collected the feedback and, in Trusts A and B, members of the PE teams. Although the ward leaders were invited to this, they did not have time to take part. We found that there was no capacity within any of our three trusts for PE teams to collate and organise multiple feedback sources ahead of these discussions. As action researchers, we brought research skills to this endeavour (collation, qualitative interpretation) and we saw little capacity to perform these roles among the trust teams.
Patient experience team involvement: limited resources, other priorities
In essence, this heuristic statement was about what can be achieved at the ward level if help is brought in by those who work centrally for the organisation. We had high hopes that PE teams would be able to help our clinical teams to access and interpret their feedback. With the drive of our co-researchers, PE teams did give us access to existing feedback for each ward where they had it. Although they held NHS in-patient survey data, FFT data, complaints and PALS feedback, they did not routinely hold other types of data that we were interested in using (e.g. they did not subscribe to information feeds from social media sites such as Care Opinion), so we searched these out ourselves. The extent to which they were able to provide collated and easy-to-use data depended on their current PE data systems: in one trust these were quite advanced, whereas in the other two they did not exist and we took on the role of creating user-friendly formats ourselves.
The PE teams and others (drafted in as required) did fulfil some important tasks for the ward teams. In Trusts A and B, PE teams helped interpret feedback ahead of sharing it with staff teams. In Trust A, they also attended several ward meetings, with a QI specialist eventually taking over this role from the PE lead. This specialist help was hands-on; in A (i), he created the leaflet that the team required for their patients, taking it through the lengthy trust approval processes on their behalf. In both A (i) and A (ii), teams said that they would like the input of additional ward volunteers to help address some of the patients’ and carers’ emotional needs that had been evident in the feedback. They wanted volunteers who could chat or run activities. The QI specialist tried to help by enlisting the voluntary service team. However, this was not achieved within the duration of the project, as there were not enough volunteers available and the recruitment process was lengthy. In Trust B, we were able to ask the Trust Volunteer Co-ordinator for similar help with the recruitment of volunteers for B (ii), and this was successful.
In all cases, these central support staff stressed caution in presuming that this level of help would be available beyond the confines of this discrete study. Although PE teams and QI staff were clearly interested (and passionate) about PE on the wards, they often stated that they had alternative organisational-level priorities related to governance (e.g. complaints management), or simply that they had too few staff to work at this level of intensity with multiple ward-based teams.
Reflection and planning: emotive and complex
This heuristic statement sought to articulate the need for a reflective process through which staff teams could engage with and make sense of their feedback. In understanding this process further, we learnt that it is the varied and complex nature of the messages contained within PE feedback that warrants such a reflective approach. Often, feedback does not contain a clear critique of what staff are or are not doing, or indeed what they should or should not do; rather, it contains an expression of patients’ and relatives’ emotional needs: loneliness [A (ii) and B (ii)], anxiety about the staff’s busyness [A (i), A (ii) and B (i)], anxiety about the future [B (ii)], anxiety about the baby and learning baby-care skills [C (ii)], anxiety about relatives [A (ii) and B (ii)] and feeling vulnerable [B (i) and C (i)]. This type of content needed consideration alongside more practical concerns (e.g. noise or information provision). We recognised that some of the issues presented were deemed by ward staff as within their control to address, but some were considered outside their sphere of influence.
We found that presenting this type of complex, emotion-rich feedback to staff was initially difficult. Conversations could feel defensive (‘that is not our fault’, or ‘we know all that but it’s the way things are’) and could contain expressions of sadness (‘I just wish we could spend more time with them’) or of annoyance and frustration (‘there will be no money to help with that’). This process needed sensitive facilitation, recognising the emotional nature both of patient feedback and of staff responses to it. One ward manager admitted that she thought that it required great bravery on the part of teams to commit to hearing what their patients really felt. Dips in levels of energy within the meeting rooms were evident as staff took on board, either vocally or quietly, the implications of patients’ comments. Sufficient time was needed for staff to make sense of the comments and to suggest what could make things better. Teams then needed support in breaking down a variety of possible actions, and we found it helpful to categorise these for them into quick wins (e.g. move the phone to reduce noise), ideas that needed a team approach (e.g. introduce social lunches), actions that needed to be escalated straight to the ward manager (e.g. addressing a problem with specific agency staff) and actions that needed to be escalated to other departments within the hospital (e.g. requesting additional ward volunteers). We encouraged small but realistic steps within limited resources.
When teams were able to carve out some time and space, it was possible for such team sessions to become playful and creative, especially after the nervousness of the initial presentation of feedback was over. We found that many ward environments were not conducive to this process: there was little communal or meeting space, or the only available space, usually the ward manager’s office, was small and continually in use. Some teams were more able than others to create time and space: A (ii) incorporated reflective feedback meetings into their staff training days away from the ward, and B (ii) legitimised this activity within their existing multidisciplinary team meetings. In all other cases, we were able to conduct this exercise with only one or two senior staff.
Applying improvement methods: no textbook guide
Our co-researchers agreed at the outset that the Institute for Healthcare Improvement’s (IHI) model for improvement,123 including concepts such as PDSA, measurement and impact, could be usefully applied, even if no one quite knew how. The IHI model’s principle of problem definition (having a clear knowledge of what the issues are for patients and the resulting changes that teams would like to make) guided us as we facilitated all teams to understand and plan changes from their feedback, influencing our introduction of the distinction between categories of actions described above.
We found that PDSA cycles were then appropriate for the category that involved teams changing an established practice or introducing something new. These PDSA cycles enabled them to take a broad issue [e.g. patients need more social activities in B (ii)] and from this identify a simple change that they could make (a daily communal lunch). We used PDSAs to start small and immediately (four patients, tomorrow), checking and addressing stumbling blocks (e.g. an inappropriate table) before extending more widely. This approach worked well in A (ii) and B (ii), where changes were spread to include many members of staff and patients. In other cases, it revealed the impracticality of initial plans: in C (i), sisters did not have time for afternoon PE rounds with all patients and, in C (ii), senior midwives realised that their staff would not automatically use a welcome leaflet in all admissions processes, and that this would require engagement and prompting. In these four wards, we deemed PDSA successful in slowing down the ever-present desire that we observed within ward teams to ‘roll out’ changes quickly, and in ensuring that processes were reliable before doing so. We supported their PDSAs with some formal monitoring of their activities, using tally charts or staff reflection sheets to capture progress. These were useful when completed, but they could not be relied on, as any additional paperwork was viewed negatively. Often the same information could be captured more successfully through verbal conversations, but these required additional ‘checking-in’ visits from us.
In B (i) and A (i), the PDSA approach was less feasible. In A (i) (an emergency department), the numbers of people involved (staff and patients) were so great, and the issue of waiting times was so intensely felt by staff, that incremental PDSA cycles were considered too slow an approach by the ward team, especially as they foresaw even greater pressures as winter approached. Instead of PDSA, established means of communication that reach large numbers of staff (e-mails, team briefs) were chosen to spread the new initiative before reliability was fully established, losing the opportunity to ensure effectiveness. In another ward, C (i), the ward manager stopped PDSA at the problem-definition stage because of staffing issues and the cancellation of her non-clinical time.
We found that, although we facilitated the formal PDSAs, other activities arose spontaneously. Actions that staff had labelled as quick fixes (e.g. ordering soft-closing bins) happened at unpredictable times depending on organisational factors (e.g. money suddenly being made available), and related ideas were tried without prompting. This was particularly evident in B (ii), where what staff called a ‘cultural shift’ towards focusing on the social needs of patients led to two staff volunteering to lead exercise classes on their days off. Because of these emergent changes, the notion of measuring the impact of any single change (in this case, lunches) was complex.
Throughout, we were aware that the terminology of measurement (before and after improvement), which had been brought in from co-researchers’ understandings of QI, did not quite reflect what we were doing. Originally, feedback had not provided a measure of a PE state to be improved, but rather had revealed a collective mood/feeling of current patients. The issues raised had at times not all lent themselves to improvement (i.e. removing the problem) but instead required some form of response, often involving talking more to patients. We did conduct repeat exercises that were able to pick up on how patients perceived staff responses to their needs, or indeed whether their needs had changed. In B (ii) and A (ii), where significant changes had been made, a follow-up exercise did reveal some positive feelings towards the staff initiatives put in place. However, in A (ii) particularly, there were also clear signs that this patient group still had the same needs, and that attending to PE was an ongoing and daily process; patients and their relatives remained elderly and anxious and staff remained busy. Ensuring due care with respect to communication was appreciated, but it was not a need that could be removed and ticked off.
Our improvement specialists described this facilitation of QI as supporting ‘the art of the possible’ (terminology that, indeed, seems to better reflect the creative and adaptive approach that we developed throughout). It used the concepts of systematic QI in a particular way: quantitative measurements were used where they were helpful [e.g. in B (ii) we kept a tally of how many patients went to communal lunch], but qualitative interpretation was used to assess change (a follow-up review of topics of importance to patients).
Celebration: do not wait until the end
Our original intention for celebration put forward a fairly linear conception of how ward teams would receive feedback, make changes and assess impact, leading to the need to celebrate at the end to mark the progress made. As we did not know how many times they would go through this process, we anticipated that perhaps they would celebrate more than once. In fact, we learnt that celebration should not be conceived only as a ‘round-up’ activity; rather, there was something to celebrate at many stages, and this ongoing approach to recognition and appreciation was good for the morale of all involved. Without exception, when we collated PE feedback for each ward we were struck by the number of compliments and accounts of ‘exceptional’, ‘caring’, ‘competent’ and ‘excellent’ care, combined with a recognition that staff provide this level of attention while being extremely pressurised and busy. When feedback was shared with the team, these messages were conveyed first and we presented them with certificates that they could display in their wards. Many members of staff, particularly the ward managers, were visibly moved by this recognition (stopped in their tracks, in fact) and it appeared to help them to relax and settle in to the process of reflecting on their feedback as a whole.
Throughout the AR, we found other opportunities to recognise and appreciate effort. In A (ii), the team of health-care assistants had embraced a new role for themselves within their ‘PE rounds’, so they received individual certificates. In B (ii), where the whole team had played their part in improving social activities for their patients, we knew that they had a good success story that could be shared more widely. We helped them apply for a regional award to publicly acknowledge their efforts. Less formally, as facilitators, we also sought to acknowledge (taking pictures where possible) the small but important changes that they were making [e.g. B (ii)’s tablecloth and flowers for the social lunches]. This aspect linked well to the adaptive, supportive QI approach that we were providing as facilitators: searching out opportunities to maintain and improve engagement and interest. Interestingly, the notion of a final celebration in each ward did not take off; in some cases, the initiatives did not really reach an end point and the teams wanted to keep going. In other cases, they were just too busy to eat the cake!
Discussion
This AR has enabled us to transparently critique the premises on which PET prototype 1 was originally conceived, in order to improve its rigour. We did this by first capturing the collective thinking that shaped its components, using the notion of heuristic statements to outline our interpretation of what these were. Then, in situ with others, we conducted AR cycles with six clinical teams, reflecting throughout with our co-researchers on what these statements meant in practice and how they could be developed and shaped into something that more closely reflects what works. In some cases, this involved adding more detail; in others, the statements were significantly changed. We identified three questions and an observation that arise from this work.
Multiple data sources: are they worth the effort?
The need to recognise the different types of information contained in PE feedback has been noted previously. Murrells et al. 54 distinguished between transactional and functional information (what happens to the patient and the ways in which it happens) and relational information (how patients are treated, human to human). Entwistle et al. 38 advocate more appropriate ways of handling and responding to ‘relational’ information than the reductionist approaches that currently dominate, arguing that procedure-driven, standardised approaches such as surveys (for collection) and checklists (to guide response) are too narrow. They build on this134 to explain why person-centred care and the needs of individuals are lost when PE is approached in this way. Our AR has revealed the thirst that ward teams have for all types of information. They want to get information from patients about things that can be quickly fixed, but they also want to understand how their patients feel so that they can develop more appropriate ways of relating to them, and they want to share this beyond the senior staff who currently receive it.
We learnt that all of these types of information can be found within feedback gained through a single route: our patient interviews provided sufficient functional, transactional and relational information to enable staff to make changes, potentially bypassing the need for ward teams to engage with other data sets (e.g. FFT) or to attempt aggregation and triangulation exercises with other available data. It is likely that other processes of direct engagement with current patients, similar to our patient representative interviews, could also elicit quality feedback. Where available, we did use existing, routinely collected feedback to supplement the interview findings. Where ward managers had good knowledge of its content, having kept informal track of its changes over time, they were able to bring this insight into discussions. More often, however, routinely collected data were relegated in importance within these discussions because of their sporadic and patchy nature. Indeed, even for A (i) (the emergency department), where the quantities of FFT and PALS feedback were sufficient to develop topics for discussion from these data alone, the collation and organisation of this feedback was extremely time-consuming, and is likely to be prohibitively so outside a research project.
A ward-level intervention: how far can this go?
We discuss this question with reference to the Feedback Response Framework of Sheard et al. 15 This framework proposes that, in order for changes to be made, there needs to be a local (ward-based) desire to respond to patient feedback, which is aligned with sufficient hierarchical structures of the right people with the right roles (structural legitimacy). There also needs to be sufficient organisational drive (readiness) to prioritise and enable changes to be made. Through AR, we found that nursing staff at the ward level generally do have the desire to respond to patient feedback, but we have also learnt that, if feedback comes in a helpful format and with sensitive facilitation, this desire can be intensified into an impetus for change. The PET facilitated this by ensuring that staff received credible, ward-level, current information about their patients; great effort was then made to generate responses together so that staff owned and participated in them. It was clear that it is at the ward level that staff (particularly nurses) can know, care for and relate to patients, and these staff are central to making change. However, other disciplines (medical staff and allied health professionals) and other supporting staff (e.g. PE teams), all of whom have ‘homes’ not on the ward but in other parts of the organisation, must be aligned to support this impetus for change, something that we found was not easily achieved.
As action researchers we were able to provide limited assistance in strengthening structural legitimacy. We enhanced the relationship between ward teams and the PE team in Trust A, who were able to progress volunteer recruitment and leaflet development on behalf of the ward teams. This also happened in Trust B, with respect to volunteer recruitment. However, we could not create structures that were not there, such as the multidisciplinary team forums required to engage medical staff. Neither could we enhance organisational readiness, but we did note how this differed by trust. Trust A was the most willing to engage in testing new roles for PE and QI teams to assist in this work, as it clearly fitted its corporate agenda most closely. There is a role for facilitators working as we did, but they would need to have legitimacy (permission) themselves to fully help ward-based teams to navigate hierarchies. 89 It has been noted that such boundary-spanning work appears extremely difficult to achieve in everyday work,135,136 suggesting that organisations must fully support teams to work with the PET if they want them to be successful in their responses to the broad range of PE feedback that they are likely to receive.
Working with patient experience feedback: pushing the boundaries of quality improvement
At the outset, our collective assumption was that PE feedback could be used within a broad QI framework of diagnosis, change and impact assessment; our AR revealed three significant things in this regard. First, the nature of the feedback that patients provided, although containing other types of information, was highly relational. Significantly, anxiety, loneliness and uncertainty were recurrent themes (all issues that require a relational response), usually involving time to talk, listen and empathise. These are issues of the kind described by Cohn124 and Pflueger125 as not being helpfully reduced to objective measures for use within improvement cycles.
Second, our approach to handling the complex stories provided by patients bore more resemblance to the concept of ‘soft intelligence’ processes (seeking and interpreting soft data)79 than to the use of metrics in improvement. In our approach, very open methods were used to talk to patients to obtain ‘untamed, unpredictable and spontaneous’ insights, and these were used to stimulate sense-making, encouraging as many staff as possible to participate in a collective process that was challenging and, at times, discomfiting. Feedback was not so much used to instruct (to tell staff what needed to be changed) as to disrupt collectively held assumptions about what it was like on a ward and to make staff question their role, or potential roles, in making things better.
Third, we found that assessment of impact did not involve the revisiting of one specific metric. In fact, in an approach described by Øvretveit137 as a break from more traditional linear QI models, staff did not decide on only one specific change, but on a number of related actions. It suited our staff teams to do this because they had been presented with complex stories, which demanded an array of related responses (e.g. social lunches, volunteer recruitment), not only discrete issues that demand discrete responses (e.g. a noisy nurses’ station). As well as making it hard to decide what to measure for impact, we also found that the nature of some of the issues tackled was such that they will never go away (e.g. patients may always have anxieties and staff will always be busy). Feedback serves to remind and reignite empathy (as soft intelligence): not to measure the impact of a specific change, but to periodically disrupt the complacency that can lead to poor practices. 79
Importantly, however, we did not throw the QI ‘baby’ (any attempt to systematically track progress) out with the ‘bathwater’ (recognition that traditional measurement concepts are limited). We could effectively use PDSA as a systematic approach to learning and informed action, which, as Reed and Card138 argue, is one of the few QI tools that gets to the ‘crux of change’ process on the ground. The important skill is to balance the use of soft intelligence, which is best left intact and not reduced to simple metrics, with the use of simple metrics that can help in understanding progress along the way.
Beyond inanimate toolkits: can we find facilitators?
Across all steps of this PE intervention, we have found that progress relied on the support of a facilitator to lead, connect people, support patient representatives, organise reflective meetings, obtain and organise feedback, celebrate, facilitate priorities and support ongoing improvement work using appropriate techniques adapted to context. On top of these tasks, there is the less tangible activity of bringing a positive sense of ‘the art of the possible’ to teams that are often bogged down and have low expectations of the changes that could be made.
As action researchers we were able to step into this facilitation role, but outside a research project it is not at all clear who would do so. We know that PE teams are overstretched,80 and that there are currently few QI staff within hospital trusts, as this is a relatively new professional grouping. 139 Our AR approach (rooted in participative and reflective techniques) has served us well in grounding the intervention within a theory of change akin to emerging participative approaches to QI. 140,141 With respect to QI for PE, it also shares participative grounding with EBCD. 30 What AR has unashamedly done for us is to place the person (i.e. the facilitators themselves) at the centre of the intervention, so that it becomes more than a toolkit document: a facilitator’s guide to supporting this work. The QI coaching models in health care142 may provide opportunities for articulating such a role outside a research context.
Conclusions
Through this study, as a team of co-researchers, we have explored how to effectively use PE feedback to make things better for patients at the ward level. We have shown that several key stages appear important for success, namely the collection of quality feedback (covering the range of information types), interpretation and team sense-making, followed by iterative, and at times systematic, changes and periodic review to revisit PE. The stages can be described as a mix between soft intelligence handling and QI, which we have shown can be used as complementary concepts.
We know from other studies that the relational aspects of PE are increasingly evident in patient and carer accounts of what matters most to them in hospital, but that they are also most at risk when health services come under increasing strain. 143 We have found that staff want to engage with feedback at this deep, and even emotional, level and are frustrated with data that are lacking in this regard. The development of the PET through this AR is therefore a response to the need to develop local PE feedback systems that can support staff to engage with these relational, as well as more functional and transactional, concerns. We have shown that, by using an appropriate method of collection and interpretation, all types of concerns can be found in one source, and that perhaps the emphasis should shift from the collection of multiple sources to the creation of suitable forums where the implications of feedback can be considered and developed into ward-based improvements.
Chapter 6 Mixed-methods process evaluation
Introduction
This chapter presents a process evaluation of the PET intervention. In the first phase of the project, an initial toolkit, PET prototype 1, was developed through co-design. It featured a facilitated process through which HCPs might be guided to work on PE feedback, including stages for setting up a multidisciplinary team, reflecting on patient feedback and making improvements using QI techniques. It was then implemented and refined through AR in the second phase of the project, which is evaluated here.
Toolkits are increasingly used in health-care research and QI as a mechanism of knowledge transfer. Often an output of a research project, they aim to educate potential knowledge-users about a given topic or to facilitate behaviour change. In health, toolkits have been developed for use in diverse topic areas and have been targeted at health professionals, patients and caregivers. Yet, despite an increase in their use, evidence of their effectiveness is patchy. 144,145 Yamada et al. 145 call for the rigorous evaluation of toolkits and a particular focus on their development, as it is here that learning can be acquired about the factors underlying their effectiveness. A key issue is the role of context in shaping outcomes, as implementation strategies may have to be tailored to ensure the effective delivery of toolkits in different settings. 145
The process evaluation presented here specifically sought to explore the factors that shaped the use of the PET intervention across a range of settings. The evaluation ran alongside the AR project in order to overcome a potential weakness of that method: that successful ‘action’ can come at the expense of rigorous research and generalisable findings. 146 Indeed, although the AR combined practical improvement work to implement and test PET prototype 1 with a research component to refine it based on the learning that had been acquired, a process evaluation was necessary to strengthen the research component of the study. A mixed-methods approach was applied to assess the impacts of the PET intervention and to develop an understanding, or ‘program theory’,147 of its functioning. However, the use of AR to implement and refine PET prototype 1 posed several challenges for the evaluation.
The formative nature of AR meant that what the ‘intervention’ was, and how ‘it’ should be implemented, were unclear. In particular, an open question was the significance of the action researchers’ facilitation to the PET intervention. Although the process embedded within PET prototype 1 was always designed to be facilitated, the use of AR to implement and refine this process meant that this facilitation could be expansive and operate across organisational hierarchies and departments. This is significant because the PET intervention may have a built-in capacity to overcome barriers to the use of patient feedback that have been highlighted in the wider literature. 11,15,148 To our knowledge, this is the first study to utilise AR to support the delivery of a PE feedback intervention. Yet, although it was anticipated at the start that the action researchers’ facilitation would be important to the delivery of PET prototype 1, the AR project and the intervention were not reducible to each other. In particular, the research components of the AR geared towards refining PET prototype 1 would not have to be replicated if the intervention were implemented elsewhere. Hence, a challenge for the evaluation was to unpick the aspects of the AR that were central to the ward-level improvement work and that should, therefore, be considered part of the intervention. Research question 1 (see Research questions) encapsulates this challenge, while research question 2 orientates the evaluation to assessing the extent to which the PET intervention can overcome barriers to the meaningful use of PE feedback.
Finally, the evaluation included a quantitative survey to further bolster the research component of the project. Although the qualitative component of the evaluation was independent of the AR, there is an obvious circularity in relying on the views of research participants to evaluate an AR project that they have been involved in as co-researchers. Hence, a longitudinal survey was carried out on the participating wards to provide a continuous measure of PE to assess how it changed over the course of the project.
Research questions
1. What were the main factors that enabled progress on the wards?
2. To what extent and how were barriers to the improvement work overcome?
3. Did PE improve on the wards, as assessed through a quantitative measure?
Methods
Type of process evaluation
The use of process evaluation methods has increased in recent years amid recognition that in-depth, theoretical understanding of interventions is required if their full potential is to be achieved. 149,150 The method is commonly used alongside RCTs to understand divergences in outcomes when interventions, already fully developed and standardised, are delivered across different sites. 151 This approach was not suitable here because PET prototype 1 was in its developmental stage, being implemented and refined through AR. Establishing prospective measures of fidelity, dose and reach, as is common in process evaluations to evaluate implementation processes,151 was therefore not possible. As such, the evaluation did not strictly adhere to existing process evaluation guidance but adopted a flexible, data-driven approach to theory development, incorporating aspects of program theory evaluation147 and abductive research theory. 152 The quantitative component, discussed in Quantitative data collection and analysis, deployed statistical process control (SPC) methods to analyse the survey data chronologically over the implementation period.
Constructing a logic model for the Patient Experience Toolkit intervention
The initial stage of the research involved constructing a logic model for the PET intervention, as is commonly done in process evaluations to represent the ‘program theory’ of interventions. 147 The evaluator (TM) immersed themselves in relevant literature, including improvement literatures, previous studies undertaken by the research team and the study protocol. An initial program theory for the intervention was subsequently formulated and represented in the logic model. This specified the intervention components, including resources and activities, which were expected to bring about effects; contextual enablers and barriers (termed ‘moderators’), which were expected to shape the project work; and possible outcomes, ranging from proximal ‘mediators’ to more distal outcomes. An initial version was shared with the research team and the wider steering group and revised based on their feedback. This was then developed over the course of the study through an ‘iterative process’147 in which the model contents were tested and refined in relation to themes emerging in the data. The final logic model is presented in Logic model for the action research project.
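Purely as an illustration of the category structure described above (the study’s logic model was a working document developed in discussion, not software, and every entry below is a hypothetical placeholder), the distinction between resources, activities, moderators, mediators and outcomes could be sketched as a simple data structure:

```python
# Hypothetical sketch of the logic model's category structure.
# All entries are illustrative placeholders, not the study's actual contents.
from dataclasses import dataclass, field


@dataclass
class LogicModel:
    resources: list = field(default_factory=list)    # intervention inputs
    activities: list = field(default_factory=list)   # what the intervention does
    moderators: list = field(default_factory=list)   # contextual enablers/barriers
    mediators: list = field(default_factory=list)    # proximal outcomes
    outcomes: list = field(default_factory=list)     # more distal outcomes


model = LogicModel(
    resources=["facilitation by action researchers"],
    activities=["collection of 'live' patient feedback"],
    moderators=["staffing pressures"],
    mediators=["staff engagement with feedback"],
    outcomes=["improved patient experience"],
)
```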
Data collection
The methods of data collection were specifically tailored to account for the status of the intervention as being both implemented and refined through the AR. To track the intervention’s development, the action researchers kept detailed reflective diaries for each ward, based on a prompt created by the evaluator. They made 119 diary entries in total. A log of major activities and project meetings was also kept for each ward by the action researchers. In addition, TM undertook participant observations of project meetings and kept detailed ‘pen portraits’ for each ward, a technique used previously by the research team. 89 These included a story of each ward’s progress, summaries of the reflective diaries/participant observations and ‘analytic memos’,153 whereby TM reflected on emerging themes for later analysis. Qualitative interviews with key stakeholders at the halfway point and at the end point (n = 17) were also conducted. These were tailored to stakeholders’ roles in the project and included patient representatives (n = 4), HCPs (n = 9) and members of PE teams (n = 4). The data were transcribed and fully anonymised. To ensure anonymity, wards were assigned the number codes 1–6, staff members and patient representatives were assigned the same number as the ward they were attached to, and trusts were assigned a code of A, B or C.
Data analysis
A coding matrix was created in a Microsoft Excel® (Microsoft Corporation, Redmond, WA, USA) spreadsheet at the halfway point of the project to organise and summarise the qualitative data and aid its analysis, following the Framework Method. 153 This was then applied to the data for the remainder of the study. The matrix included columns for each ward and four rows for the logic model categories (resources, activities, moderators and outcomes), which served as descriptive themes for organising the data. Although this framework was deductively derived, it had the status of Miles and Huberman’s ‘start list’154 and new themes were incorporated when appropriate. The final coding framework included seven core themes: the project as a whole, intervention resources, activities/mechanisms, the process of implementation, people and relationships, moderators and outcomes. Each theme included sets of subthemes that were either informed by the initial logic model contents or had been identified in TM’s ‘analytic memos’ in the pen portraits. These themes could be descriptive or conceptual and often included contrasting participants’ views on a topic. There was also some overlap between themes. For example, the prominence of references to the people and relationships involved in the project led to a new theme being introduced, but these references could also be coded as a subtheme of mechanisms when a person or group was identified as contributing to outcomes.
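As a rough sketch of the matrix structure described above (illustrative only: the study used a Microsoft Excel spreadsheet, and the cell entry charted below is a hypothetical placeholder), a Framework-style matrix with one column per anonymised ward and one row per logic model category might look as follows.

```python
# Illustrative Framework-style coding matrix: one column per (anonymised)
# ward, one row per logic model category. Cells hold summarised qualitative
# data; the single entry charted below is a hypothetical example.
import pandas as pd

wards = [f"Ward {n}" for n in range(1, 7)]
categories = ["Resources", "Activities", "Moderators", "Outcomes"]

# Start with empty cells; summaries are 'charted' in as analysis proceeds.
matrix = pd.DataFrame("", index=categories, columns=wards)
matrix.loc["Activities", "Ward 2"] = "'Live' feedback collected weekly"
print(matrix)
```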
The coding framework was analysed in order to refine the logic model. Where a successful outcome was identified on a ward in the outcomes column, the factors that enabled this apparent success could be traced by examining the other columns for that ward, and vice versa with non-optimal outcomes. The content of the initial logic model was refined accordingly, going through eight iterations. However, towards the later stages of the project, TM questioned whether the logic model could accurately represent the intervention’s logic. Created for all wards combined, the logic model was failing to capture how the intervention was taking on a different form on each ward. Hence, in a final data theming stage, TM deployed the techniques of abductive research to interrogate the logic model and the coding framework. An abductive approach was deemed appropriate because it seeks to creatively theorise unexpected research findings that have emerged in empirical data. 152 The approach has been used previously by the research team, in the context of patient safety improvement projects. 15 Here, TM, in frequent discussions with LS, formulated various meta-level themes that sought to convey the core functioning of the intervention, addressing research questions 1 and 2 set out in Research questions.
Quantitative data collection and analysis
A validated Picker PE survey50 was selected for the quantitative component of the evaluation, and 12 questions were included in the final survey because of their relevance to hospital in-patients. The survey began in the week commencing 10 January 2017 to allow for a baseline of 8 weeks prior to the start of the project on the wards. Data collection ended in the week commencing 15 April 2018, 4 weeks after the end of the improvement work. A research team consisting of five people collected the data. Each member was assigned a ward, and wards were mostly visited on the same day and at the same time each week to minimise possible confounding variables related to the person and day of collection. Because SPC methods do not use traditional statistical significance tests but adjust to the available data set, the recruitment target was not based on a statistical power calculation, but rather on what was feasible and sufficient to create a functioning SPC chart with which to analyse change over time. We therefore chose a weekly sample of 15–25 surveys. It was also anticipated that having 20 surveys would provide a reasonable number of responses across the six wards on any individual question/statement. However, because the ward environments are very different, recruitment rates were expected to differ owing to factors such as patient throughput and patients’ capacity to engage. Hence, although 20 surveys per week was the overall aim, researchers were asked to recruit up to six participants on each visit so that the wards with a higher recruitment rate would compensate for those with a lower recruitment rate. A total of 1028 participants were recruited over the course of the project, amounting to a weekly average of 15.5 participants. The main factors that prevented the target of 20 from being attained were very low patient throughput on some wards, meaning that on a visit it might not be possible to recruit any new patients, and mental capacity issues that prevented some patients from being recruited. Survey responses were therefore unevenly spread across the wards. Table 8 details the number of participants recruited per ward.
Table 8 Number of participants recruited per ward

Ward | Recruitment (n)
---|---
Ward A | 271
Ward B | 278
Ward C | 167
Ward D | 53
Ward E | 121
Ward F | 138
Total | 1028
Once collected, the data were analysed using SPC charts. The SPC method is increasingly used in health care, often as part of an intervention to provide practitioners with real-time feedback. Here, it was separate from the intervention and was used to assess whether or not PE improved on the wards during implementation. A core assumption of the SPC method is that variation exists in all performance data, the task being to identify variation that is out of the ordinary and, therefore, has a ‘special cause’. To that end, data are displayed chronologically in SPC charts, enabling various statistically derived decision rules to be applied visually. The charts feature three main lines: the mean (sometimes the median) and an upper and a lower control limit. The control limits set the statistical range within which data can be expected to fall under stable conditions. Data points outside these limits indicate special cause variation, as do unusual data patterns: these include trends of six consecutive data points (all going up or all going down) and runs of eight successive data points that, although not forming a trend, all sit above or below the mean. 155
A common use of the method is to see whether an intervention improves performance on a measure of interest. An instance of special cause variation can imply that the intervention has ‘worked’. If a trend or run is identified, a ‘step change’ is entered into the chart, whereby the mean and the control limits are recalculated to reflect the change in the data. Performance will stabilise around this new level if the improvement is sustained. 156 We deploy this approach in Quantitative findings, presenting two SPC charts. The first of these presents, as initially intended, the survey data for all wards combined over the study period. The second uses the findings of the qualitative research to inform a secondary analysis of the survey data.
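As a minimal sketch of the decision rules described above, assuming a standard individuals (XmR) chart construction (this is an illustration, not the study’s analysis code; the function name and the standard 2.66 moving-range constant are our own additions), the three chart lines and the two special-cause rules could be computed as follows.

```python
# Illustrative SPC individuals (XmR) chart logic: the 'three main lines' plus
# the two special-cause rules described above. Not the study's analysis code.
import numpy as np


def spc_signals(values):
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    # The average moving range estimates short-term variation; 2.66 is the
    # standard XmR constant that converts it to roughly 3-sigma limits.
    avg_moving_range = np.abs(np.diff(values)).mean()
    ucl = mean + 2.66 * avg_moving_range  # upper control limit
    lcl = mean - 2.66 * avg_moving_range  # lower control limit

    # Rule 1: any single point outside the control limits.
    signals = [(i, "outside control limits")
               for i, v in enumerate(values) if v > ucl or v < lcl]

    # Rule 2: a trend of 6 consecutive points, all rising or all falling.
    steps = np.sign(np.diff(values))
    for i in range(len(steps) - 4):
        if abs(steps[i:i + 5].sum()) == 5:
            signals.append((i, "trend of 6 consecutive points"))

    # Rule 3: a run of 8 successive points all on one side of the mean.
    sides = np.sign(values - mean)
    for i in range(len(sides) - 7):
        if abs(sides[i:i + 8].sum()) == 8:
            signals.append((i, "run of 8 points on one side of the mean"))

    return mean, lcl, ucl, signals
```

On this construction, the ‘step change’ described above would simply amount to re-running the calculation on the data points collected after the change, so that the mean and control limits reflect the new level.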
Qualitative findings
Four themes are relevant to research question 1: What were the main factors that enabled progress on the wards? These are ‘Not necessarily the toolkit document’, ‘People and relationships’, ‘The facilitation role’ and ‘Organisational support’. Two themes are relevant to research question 2: To what extent and how were barriers to the improvement work overcome? These are ‘The significance of staffing pressures’ and ‘Creative and pragmatic implementation’. This second theme includes three further subthemes: ‘Adapting to the “inner” ward context’, ‘Creatively applying QI techniques’ and ‘The limits of escalation’.
Not necessarily the toolkit document
PET prototype 1 took the form of a large, turquoise ring-binder. Inside this were an introductory page to the project and a one-page flow chart outlining the process designed for HCPs to go through to act successfully on patient feedback. Following this, each step of the process had a specific section that outlined how it would proceed. Because one aim of the AR was to refine this process, deviations from it were expected; nonetheless, it was still anticipated that PET prototype 1 would be used by the ward teams. Each ward had a copy, which was presented to the ward managers by the action researchers, who explained that they would be working through the prototype over the course of the year. However, PET prototype 1 was rarely consulted in meetings from that point onwards. Only the one-page flow chart was used as a device for the action researchers to introduce newcomers to the project.
Interviews with the ward managers at the halfway point revealed a significant degree of scepticism about PET prototype 1 (ward managers 3, 4, 5 and 6). Some of this was directed at its form, with some ward managers viewing it as too large (ward managers 3, 5 and 6). Others highlighted stages that they thought were not required. A guiding principles exercise, designed to heighten awareness of PE among staff, received criticism (ward managers 3, 4, 5 and 6). One ward manager who had not been able to attend the co-design workshops saw the whole process as too time-consuming, stating:
This has clearly been worked out by somebody who has had time and time is something I do not have.
Ward 3, interviewee 1, ward manager 3
Ward manager 1 liked the initial prototype but, even then, it did not feature prominently in the meetings that they were involved in. They also recognised that other resources had been developed for use instead of it:
You can use the toolkit to think ‘right, how do we get from x to y’ and it helps you with that doesn’t it? Whether we’ve used it fully as they want us to use it is another matter. That’s what you’re going to ask me next isn’t it? [Laughs] It’s very possibly not happened but I think it has happened on other bits of paper.
Ward 1, interviewee 2, ward manager 1
Other ward managers similarly thought that PET prototype 1 had been superseded by resources developed by the action researchers, referring to patient feedback posters and flow charts as evidence of their use of the toolkit (ward managers 4, 5 and 6). These resources had not, in fact, superseded the toolkit but had been developed as part of their wards’ PDSA cycles, reflecting the ward managers’ distance from PET prototype 1.
For the action researchers, PET prototype 1 was an occasional theme of their reflective diaries, with critical commentary on both its form and the process contained within it. Similarities could be noted between the action researcher and ward manager accounts, including concerns about its size and the guiding principles exercise. During the later stages, PET prototype 1 did become central to the AR as a document to be critiqued, with the action researchers, the design team and the ward teams focusing their attention on developing it based on the learning that they had acquired. Here, having a concrete toolkit document appears to have been useful for the research, if only in the sense that inadequacies could be identified and avoided in later iterations.
People and relationships
The involvement of a diverse group of committed research participants was pivotal to the progress of the AR project. The study was located at a major research institute in the north of England, involving a dedicated and experienced research team and a steering group. The action researchers were academic researchers and often sought advice from the PI, the research programme manager and the two improvement specialists who sat on the steering group. In addition, before the project began and over the course of its first phase when the prototype was being developed, the action researchers established close relationships with the patient representatives, the ward teams and the corporate staff of participating trusts.
The research institute’s established PPI networks and links to the volunteering services of participating trusts meant that patient representatives were available and could take on a vital role in the project. Some worked closely with the ward staff, collecting ‘live’ patient feedback on wards for discussion at meetings (patient representatives 1, 3, 4 and 6). Some brought with them knowledge of the role and relevant work experience that made them particularly suited to working closely with staff and patients on the wards. Likewise, the ward staff played a pivotal role, and great care was taken by the action researchers to develop and maintain relationships with them. Not only was their involvement crucial for improvements to be made, but the action researchers also viewed the trusting relationships built up with staff as crucial to enabling often highly emotional patient feedback to be discussed. Similarly, efforts were made to develop relationships with corporate staff. The PE team of Trust A was not significantly involved in the project but did provide some support after the action researchers actively sought to develop a relationship with them. In Trusts B and C, relationships between the action researchers and local PE teams were strong throughout.
The participatory nature of AR meant that these relationships assumed even greater significance than they would have in a standard research project. Corporate staff, ward staff and patient representatives were engaged with as co-researchers and were recruited into the project as such. The action researchers also took the participatory ethos of AR very seriously, frequently discussing its implications for their practice. This is significant as it may have shaped how the staff engaged with the project.
Indeed, while staff engagement ebbed at times, it was generally high throughout, and the trusting relationships forged between the action researchers and staff may have helped to maintain engagement through any rocky patches. Where staff engagement did ebb, this seemed to be correlated with staffing pressures. At different points of the project, ward managers referred to staffing pressures as a reason for planned meetings not going ahead (ward managers 2, 3, 4, 5 and 6) or needing to be put on hold for a period (ward managers 3, 5 and 6). Some members of corporate staff believed that the action researchers were too cautious and should have pushed the ward managers to prioritise the project during these times. However, this was a minority view. Others expressed the opinion that the action researchers’ approach of respecting the wishes of ward staff when they wanted to pause participation may have been beneficial in the long run. Certainly, ward managers interviewed at the halfway point appreciated how the action researchers engaged with them. Ward manager 2 stated that they liked developing something ‘straight out of the box’ rather than being told to implement something developed elsewhere. Ward manager 3 spoke of how they liked that problems identified in patient feedback were not just seen as errors that needed to be acted on, but could be interpreted and explained through dialogue. Ward manager 4 spoke of how it was empowering for staff, which was important as they were set to leave the ward:
It’s nice for the staff to be able to identify areas that they want to improve on because ultimately, and I have said this the whole way through, I’m not a permanent fixture here in terms of the ward manager and I will hopefully very soon be back to my original ward so they need to feel empowered to make changes themselves once I’m gone. So you know it has been nice to work collaboratively that way so that the staff feel that actually this is something that they can do themselves for the patients, they don’t need somebody telling them that this is what you need to do and it’s nice for them to be part of the decision-making process.
Ward 4, interviewee 6, ward manager 4
The facilitation role
Research participants overwhelmingly agreed that the action researchers’ facilitation was more important to the project work than PET prototype 1. One of the improvement specialists had fully anticipated this, expecting that the evaluation would produce evidence to back up their view that toolkit documents are insignificant relative to the facilitation role required to support them:
If I’m honest and being blunt I have always been cynical that a toolkit is what is needed. I don’t think the NHS needs any more toolkits and that is based on my experience of working with front-line teams over many years where no matter how good the quality of the content within a toolkit, I have never known any front-line team member or team use it and generally speaking it sits on a shelf. So I was never convinced that in order to help teams improve patient experience what we needed was a toolkit . . . You have got to facilitate it and support it in a collaborative way whatever it is you are trying to do.
Interviewee 10, improvement specialist
Many of the core facilitation tasks were not anticipated beforehand but emerged as the project unfolded. On finding that none of the wards had patient feedback available to them in a usable format, the action researchers themed existing data and/or made arrangements for more to be collected. One ward had sufficient survey data available (ward 5), while ‘live’ data were collected on the others (wards 1, 2, 3, 4 and 6). It was mostly the patient representatives who collected the ‘live’ data, but the action researchers collected them on ward 2 when the patient representative could not attend. These data were then organised into topics by the action researchers, patient representatives and some corporate staff in what was referred to as a ‘collective, interpretative process’. The action researchers facilitated these sessions and subsequently created patient feedback handouts for each ward based on the discussions. Once these had been presented to ward teams, the action researchers guided them through PDSA cycles to prioritise issues and to implement and evaluate changes. It was anticipated at the start that knowledge of QI methods would be crucial here, and both action researchers attended courses to increase their knowledge of the field. In addition, outputs of the PDSA cycles, such as posters and leaflets, had to be designed for use on the wards, requiring additional skills on the part of the action researchers. It was observed that front-line HCPs did not have the time or the skills to undertake these tasks.
An additional, emergent facilitation activity was the more intangible effort to encourage ward staff to believe that they could make changes despite the pressure that they were under. Strategies included, for example, focusing on ‘quick wins’ first to show what could be done, or celebrating existing examples of good PE. The action researchers referred to this as encouraging a positive sense of the ‘art of the possible’. Finally, the facilitation sometimes involved escalating issues when they required the input of other hospital departments. Over the course of the project, the action researchers contacted a range of actors as part of the ward action plans, including PE teams and volunteering services. Escalation included procuring door stoppers and bin silencers to address noise at night (ward 3), securing volunteers for wards (wards 4, 5 and 6) and clearing project outputs through relevant trust authorities (wards 1 and 6). Newly established connections could also be drawn on when problems arose. Some ‘escalation’ activities were also carried out by PE teams, although ward manager 3 was adept in this regard, contacting their information technology (IT) and estates departments as part of the project work.
The ward staff expressed gratitude for the facilitation provided by the action researchers. The patient feedback handout was a popular item. Some ward managers had shown it to their staff, reporting that it had helped boost staff morale (ward managers 5 and 6). Others had used it to escalate issues (ward manager 3). In one ward (ward 4) that had gone through a period of uncertainty at the beginning of the project because of a change of leadership and some organisational changes, the project was seen by the ward manager and a qualified member of staff as having been empowering for staff and as having improved ward culture and team working. The patient representative for the ward observed a new sense of enthusiasm on the ward. In another ward (ward 1) where the project work had required a change to staff routines, the action researchers had provided a ward manager with assistance over several months to help them ‘roll out’ the change. The ward manager reported that this had improved their understanding of QI and even their management skills. Similarly, three ward managers saw the new connections formed through escalation as a positive outcome of the project (ward managers 4, 5 and 6). While these connections made ward-level improvements possible, they also opened opportunities for further improvement work in the future:
By being part of this project it’s opened more doors so if we needed more support or we needed something else implementing we have got that. There is an individual for each area where you can tap into a resource and just say ‘look, it isn’t working, we need this, this and this’. Like XXXX [local PE team member] is wanting to look at different things. He has come into XXXX to observe and bring some fresh ideas.
Ward 5, interviewee 3, ward manager 5
A possible issue with this expansive facilitation role is that ward teams may have left tasks for the action researchers, preferring to do as little as possible themselves. A related concern is that the action researchers’ extensive facilitation may have minimised staff’s exposure to the intervention, blunting its potential effects. A patient representative shared this concern, warning that the significant changes that had occurred on their ward could be undone once the action researchers withdrew, given their prominence in the project. Yet, without the facilitation provided as part of the PET intervention, it is questionable whether or not improvements would have occurred at all, particularly given the extent of staffing pressures on the wards (see The significance of staff pressures). The action researchers were also wary of doing too much for staff and could sometimes be seen pushing back against requests that seemed unreasonable or tasks that staff could do themselves. This issue should, therefore, be seen as a tension rather than as a fundamental weakness of an expansive facilitation role.
Organisational support
The project work received significant organisational support from the three participating trusts, although the level and nature of the support differed between them. Each trust had a PE team that provided most of the support, although in Trust B the PE team was spread over the research and risk departments. Trust C integrated the project into its own work on PE. Other actors who provided some input to the project were Heads of Nursing, matrons and personnel from volunteering services, medical illustration, information technology, estates and communications.
Organisational support took a number of forms, ranging from making patient feedback available to direct participation in project meetings. The attendance of a PE lead at ward meetings was critical to the project’s progress on ward 6, as the ward manager had initially been reluctant to get involved. The fact that a prominent member of corporate staff attended the meetings may have served as a signal of Trust C’s commitment to the project and opened up new opportunities for collaboration, which was appreciated by the ward manager. The PE teams also had a significant role to play when the project encountered difficulties related to staff pressures. The most notable example occurred on ward 5, when all project meetings had to be cancelled because the non-clinical time of staff was pulled centrally by Trust C. When the action researchers had to withdraw from the field, they reached agreement with the local PE team that it would continue supporting the ward’s improvement work. The local PE team produced a patient information leaflet and took it through a lengthy clearance procedure involving the trust’s communications and medical illustration departments.
The involvement of PE teams meant that organisational learning emerged as a main outcome of the project. The two PE teams that were heavily involved spoke of how they learnt about the support that front-line staff require to work successfully on PE feedback. On seeing the potential of having patient representatives collect ‘live’ feedback, one of the PE teams planned to replicate their role across Trust B. The new connections forged between the PE teams and the wards also made future collaboration possible. A PE team member from Trust C welcomed this, emphasising positive impacts of the project beyond the immediate use of the PET:
This came about as a direct result of Rose and Claire’s involvement in XXXX. Their involvement kick-started how XXXX were looking at themselves and how they could improve the service to patients. And now that I have got a relationship with XXXX the matron we are beginning to look at other things we can do. So although the toolkit might not be at the forefront of our thinking, all these other things wouldn’t have happened without their involvement. I don’t know how you capture that other than to say it sparked this interest and wider thoughts about what could be done.
Trust C, interviewee 15, PE team member
The significance of staff pressures
A common view among research participants was that the overarching context of resource constraints and pressures on staff prevented the project from realising its full potential. The action researchers and the patient representatives were frequently concerned about the well-being of staff, and efforts were made to ameliorate the worst effects of the pressures (see Creative and pragmatic implementation). However, despite the resourcefulness of the action researchers and ward staff in working around the pressures, full implementation of the PET could not always be achieved.
Indeed, although establishing prospective measures of fidelity, dose and reach was not possible because the PET was not fully developed at the start of the study (see Type of process evaluation), it is possible to assess retrospectively how far each ward progressed, now that the process embedded within the PET is fully developed. The final PET iteration contains six stages: forming a ward team, gathering data, reflecting on the data, prioritising issues, making changes and measuring impact. All wards got to the stage of making changes, but only three wards (wards 1, 4 and 6) completed the final stage of measuring impact. This does not in itself imply higher or lower engagement with the intervention, as each ward context presented different opportunities and challenges. It is notable, however, that these three wards had less acute staffing pressures, reflected in fewer and shorter pauses in their participation in the project: ward 6 paused participation for 4 weeks over the summer period because of shortages of qualified staff but returned to full participation shortly after, whereas wards 1 and 4 participated fully throughout. By contrast, the three wards that did not complete the PET had significant pauses lasting longer than 3 months, because of staffing pressures (wards 3 and 5) and a combination of staffing pressures and a change of leadership (ward 2). In addition, the wards that fully implemented the PET generally did better at forming a core ward team drawn from a large group of staff. Ward 4 already had multidisciplinary team meetings that the action researchers were invited to attend, and the ward manager of ward 6 incorporated the project into their ‘away days’, thereby involving a considerable number of nursing staff. Ward 1 is the exception in this group, as the project was led by two ward managers, but they worked together as a team. By contrast, ward 5 had severe staffing pressures that prevented large numbers of staff from being involved, whereas the ward managers of wards 2 and 3 mostly worked alone. Their desire to retain sole ownership of the project may itself be linked to staffing pressures, as they did not want to add more work to already intense staff workloads. Ward manager 3 was apologetic about this, recognising that it may have prevented the project from achieving its full potential:
I’m worried about my role in the project because I have felt like I can’t give it the commitment that it probably needs. I certainly haven’t been able to get a team of people together, not that they are not interested but it is the time commitment to get to the meetings.
Ward 3, interviewee 1, ward manager 3
This failure to form a multidisciplinary team on some wards complicated the PET’s implementation. On ward 2, the improvement work effectively ended when the ward manager left for another position, because no one was available to take over from them.
Creative and pragmatic implementation
A defining characteristic of the intervention was how key components of it could be adapted, via the action researchers’ facilitation, to some of the challenges and opportunities (i.e. ‘moderators’) present in the different ward settings. Progress using the PET (if not always full implementation) hinged on its creative and pragmatic implementation.
Adapting to the ‘inner’ ward context
The action researchers could be seen responding to, and sometimes modifying, micro-level factors present in the ‘inner’ context157 of the wards. The contribution of each ward team member depended on a range of factors, such as how they chose to engage, the time they had to engage and their skills and capacity to engage. The action researchers were highly adaptive in this regard, sometimes doing tasks for one ward team that had been carried out by ward managers or patient representatives on other wards. In addition, the action researchers took careful steps to ensure that the right person would lead on the improvement work. They could sometimes work around a ward manager’s reluctance to delegate, or the unhelpful influence of particular staff members in ward meetings, by recruiting other members of staff who expressed an interest in the project. The effects of staffing pressures on some wards were ameliorated by the action researchers holding ‘pop-ins’ rather than formal meetings, while greater emphasis was placed on the celebration of good practices on those wards lacking in self-belief and efficacy. Issue prioritisation provided a further opportunity for the action researchers to influence proceedings. Ward teams that were suspected of selecting easy issues could be encouraged to take on something more challenging, whereas those that were overly ambitious, and therefore in danger of being disappointed because they had taken on too much, could be slowed down and encouraged to address ‘quick wins’. Selecting an issue that resonated with staff and that did not require significant resources appears to have been a factor in the success of ward 4’s project:
It helps that the staff have been keen. The occupational therapist XXXX has been keen to get people involved. They see those sorts of interactions as beneficial to rehabilitation . . . The social lunches felt like an easy win for us initially. It didn’t require much in terms of resources. A lot of it has been time and changing the way that staff do the lunches. That’s been the biggest challenge for the staff and on some occasions they haven’t been able to do it, which is fair enough. That’s a point around, when you first come into a ward, pick the easy wins and push the bigger issues further back.
Ward 4, interviewee 8, patient representative 4
The action researchers’ flexible and pragmatic approach at ward level was therefore central to the project’s progress, yet there were limits to what it could achieve at that level. Pop-in meetings may have ameliorated some of the effects of staffing pressures, but the project still had to be paused for significant periods on three of the wards because of them (see The significance of staff pressures). Furthermore, while the action researchers had some flexibility in ensuring that the right person led on the improvement work, staffing pressures and the absence of multidisciplinary team structures on most wards (wards 1, 2, 3 and 5) limited the extent to which they could recruit new members of staff.
Creatively applying quality improvement techniques
Quality improvement also had to be adapted to each ward project, depending on the issues selected, the QI skills of staff and the extent of staffing pressures. The PDSA framework, which emphasises the importance of starting small and expanding through piecemeal tests, was fruitfully applied by the action researchers to guide the issue prioritisation process and the development of changes. Yet, incorporating measurement into the PDSA cycles was a challenge. Measurement was used on some of the wards (wards 4 and 6) to record how often a change to staff’s daily routine was delivered, in order to test for feasibility. This prompted some rethinking on ward 6 regarding who would ‘roll out’ the change, with health-care assistants taking on a more prominent role when it became apparent that qualified staff struggled because of staffing pressures. However, staff did not always keep a record of what they were doing when asked to and some did not seem to grasp why it was important. Staff on ward 5 struggled to keep the record because it added too much to their workloads, even though they reported delivering the change.
A further challenge was to measure ‘impact’ accurately. The FFT data were analysed before and after a change on ward 5 but, while a positive change was detected, the action researchers considered this to be an overly crude measure of outcome. Ward 4 successfully demonstrated the impact of a change through a combination of process data implying that the change had been implemented and ‘live’ outcome data, collected by the ward’s patient representative before and after the change. Yet this approach was not as successful on wards 1 and 6, because it was difficult for the patient representatives to capture relevant data from patients. On ward 1, a new patient information leaflet had to be brought to the patients’ attention first in order to ask their opinion of it, leaving open the questions of whether or not staff were using it and whether or not it improved PE outcomes when used. On ward 6, improved PE outcomes were detected but it was unclear what had caused the improvement. The patient representative also noted that staff pressures meant that it was unlikely that the issue had been fully addressed:
I think that it had improved but it was still a bit difficult to pin down. The overall impression I got was that it had improved but not in the direction they were expecting [laughs] . . . Whether it fully achieved what was intended I’m not sure because one of the ongoing issues was of course staffing and you can only do so much with what you have got.
Ward 6, interviewee 17, patient representative 6
Although this may fall short of the standards of scientific versions of QI, the practice of having patient representatives talk to patients was widely viewed as having benefits beyond demonstrating ‘impact’. These included enhancing the flow of communication between patients and staff and improving the quality of the patient feedback available at ward level.
The limits to escalation
Escalating issues to get buy-in from other departments was a further way to get around challenges to the improvement work. The escalation activities are discussed above (see The facilitation role), including examples of procuring resources for the ward projects and utilising the connections with trust departments when problems arose. Although escalation had successes, it also had limits. Most of the success came where trust departments, particularly PE teams, were already collaborating with the action researchers. In addition, the severity of some issues, notably staffing pressures, meant that escalation did not always succeed even when local PE teams were heavily involved. For example, despite the pivotal role that the local PE team played in progressing ward 5’s project by producing a PE information leaflet, full implementation of the PET could not be achieved because all non-clinical time of staff had been pulled. On other occasions, the action researchers’ concern for staff well-being led them to escalate, unsuccessfully, to ask whether more staff could be assigned to the wards or to request support as part of their ‘duty of care’ as researchers. An entry in the reflective diary shows the level of concern that one of the action researchers felt following a meeting with staff, prompting them to question the very basis of the project under the current circumstances and to reflect on how they might escalate the issue:
They kept letting off steam about really emotional and high intensity situations they are finding themselves in as staff at the moment – not being able to meet all patients’ needs (some of which are urgent and life/death situations). They described the strain on staff and the fact that support mechanisms were currently being put in place for staff to cope. I still feel quite emotionally affected even writing about this. It had a profound effect on me . . . I cannot help thinking that the urgency of the situation needs discussing and not dismissing. Is a PE project appropriate when there are such pressing demands on the service? Is it that a PE project is needed even more, or do we all just need to lobby to try and raise the profile of the desperate situation? I am undecided.
Action researcher’s reflective diary
Logic model for the action research project
Figure 14 presents the final logic model, developed as the project progressed. The logic model serves as a complement to the theoretical assumptions explored in Chapter 5 and to the final PET iteration. The key intervention resources are identified on the left of the diagram. The PET is included alongside the skills, leadership and motivation of the ward teams, including ward staff and patient representatives, who are central to the PET’s implementation. The wider supporting mechanisms are also identified, including the facilitation role and organisational support. More detail of what the facilitation entails is provided in the intervention activities section, along with a list of additional intervention mechanisms, notably participation, reflection, feedback and action planning/QI cycles. These are mechanisms that are recognised as effecting change in the wider improvement literatures. Details of how they manifest through the PET and how they can be adapted to PE are found in Chapter 5. Contextual factors that shaped the intervention, whether enabling it or serving as a barrier, are listed in the moderators section of Figure 14, divided into those that were external to the ward settings and those that were internal. On the right of the diagram are the outcomes of the intervention, distinguished between proximal ‘mediators’ and more distal outcomes. While the ideal, distal outcome of a fully embedded PE system (with sustained improvements to PE occurring) was not achieved on any ward during the year we worked with them, the various proximal outcomes listed in the logic model could be identified across the participating wards. Furthermore, the list of moderators included in the model provides insight into what would have to be in place for the PET to achieve such an ideal.
Finally, the methodological implications of the project are listed at the bottom left of Figure 14. The evaluation lends weight to the action researchers’ conclusions regarding the benefits of AR’s participatory approach and the centrality it assigns to the facilitator role within the intervention. It also supports a further core finding of the AR: the need to apply QI creatively to PE (see Chapter 5).
Quantitative findings
The two SPC charts presented below can be analysed to address research question 3: ‘Did patient experience improve on the wards, as assessed through a quantitative measure?’ The type of chart we use is a ‘P’ chart, specifically designed to analyse classification data (each survey response was coded as either identifying a problem or not identifying a problem). 155 Both of the charts display the survey data for all questions combined. Although the Excel database used to create the charts was set up to analyse questions individually and by groups of questions if required, tests of the correlations between question responses (including Pearson correlations and factor analysis) revealed that the questionnaire had high internal reliability only when all questions were analysed together as a single PE measure. Thus, both charts show the percentage of negative responses across all survey questions. For the first chart, the unit of analysis is all wards combined, because it was anticipated that not enough data would be available to carry out a detailed, ward-level analysis. The second chart is a post hoc SPC chart, created at the end of the implementation period to reflect change in PE for those wards that fully implemented the PET.
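For readers unfamiliar with P charts, the sketch below shows how the centre line and control limits of such a chart are typically computed for weekly proportions of negative responses. This is a minimal illustration in Python, not the study’s own tool (the charts were created in an Excel database), and the weekly counts shown are hypothetical.

```python
import math

def p_chart(negatives, sample_sizes):
    """Centre line and per-week 3-sigma control limits for a P chart.

    negatives[i]    -- number of responses identifying a problem in week i
    sample_sizes[i] -- total survey responses received in week i
    """
    p_bar = sum(negatives) / sum(sample_sizes)      # overall proportion negative
    limits = []
    for n in sample_sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)  # binomial standard error
        lcl = max(0.0, p_bar - 3 * sigma)           # limits clamped to [0, 1]
        ucl = min(1.0, p_bar + 3 * sigma)
        limits.append((lcl, ucl))
    return p_bar, limits

# Hypothetical weekly data: roughly 21% of responses flag a problem.
p_bar, limits = p_chart(negatives=[13, 12, 14, 11], sample_sizes=[60, 58, 63, 55])
```

Because the number of survey responses varies from week to week, the control limits vary with it; this is the feature that distinguishes a P chart from charts designed for fixed subgroup sizes.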
The step changes that can be identified in Figure 15 indicate two instances of special cause variation. The initial mean (based on the 8-week baseline data) was 21% negative responses. A run of eight consecutive data points below this mean occurs after the week commencing 22 May, prompting the first step change. At 18%, the new mean implies a 3 percentage point improvement in PE over the period. However, a second run of eight data points follows the week commencing 20 November, necessitating the second step change. The new mean for the subsequent period is 24% negative responses, implying a 6 percentage point decline in PE. Hence, PE did not improve on the wards on this quantitative measure. The initial mean is 21% negative responses and the end mean is 24%, implying a 3 percentage point decline over the course of the AR project.
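As an illustration of the rule that triggers such a step change, the sketch below flags runs of eight consecutive points on one side of the centre line, a commonly used SPC signal of special cause variation. It is a hedged sketch under that assumption: the run length of eight matches the rule described above, but the exact rule set applied in the study’s Excel tool is not reproduced here, and the weekly values are hypothetical.

```python
def find_runs(values, centre, run_length=8):
    """Return the start index and direction of each run of `run_length`
    consecutive points strictly on one side of `centre`.

    A flagged run is the cue to re-baseline the chart (a 'step change'):
    the mean is recalculated for the period after the run begins.
    """
    runs, below, above = [], 0, 0
    for i, v in enumerate(values):
        below = below + 1 if v < centre else 0
        above = above + 1 if v > centre else 0
        if below == run_length:
            runs.append((i - run_length + 1, 'below'))
            below = 0
        if above == run_length:
            runs.append((i - run_length + 1, 'above'))
            above = 0
    return runs

# Hypothetical weekly proportions around a 21% baseline mean:
weekly = [0.22, 0.20, 0.19, 0.18, 0.19, 0.17, 0.18, 0.19, 0.18, 0.20]
print(find_runs(weekly, centre=0.21))  # [(1, 'below')]: one run below the line
```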
Several caveats are necessary when interpreting Figure 15. A key issue when using SPC charts is how to attribute causation between interventions and outcomes. 158 Ideally, comparison or control groups are used to show what might have occurred in the absence of an intervention, yet none was available here. An alternative option is to annotate SPC charts with the point at which the intervention is introduced and an anticipated time lag, calculated prospectively, for the effects to ‘kick in’. 158 Causal claims can then be made, based on the extent to which the data map onto the annotated information. However, the complexity of the PET made it difficult to annotate the chart in this way. An anticipated time lag could not be established prospectively, as the intervention was not fully developed beforehand and it was unclear which stage of the PET would have the most impact. Although it is likely that the PDSA cycles would have had the most effect on PE, other stages within the PET, such as the reflexive sessions with staff, cannot be discounted. In addition, the wards progressed through the stages at different times and encountered different issues, making it difficult to annotate each ward’s journey on a chart for all wards combined.
Although it is possible to conclude that PE declined over the study period, it is not possible to draw conclusions about what caused this. Possible explanations include the AR project itself or staffing and service pressures, the latter being a prominent theme in the qualitative data. The improvement over the summer period that is evident in the chart could indicate a lifting of winter pressures, which returned in November. Furthermore, although it is possible that the AR project contributed to the decline, perhaps by diverting resources away from front-line care, the decline may have been more profound in its absence.
A further weakness of Figure 15 is that it includes all wards together when the wards had varying degrees of success in implementing the PET. Some had to pause participation because of the pressures. Compounding this issue is the fact that varying amounts of survey data were collected from the wards. It is possible that the wards that successfully implemented the PET have less weight in the chart than the wards that struggled, although the opposite is also possible. To get around this issue, a second chart was created for those wards that fully implemented the PET: wards 1, 4 and 6 (see The significance of staff pressures). These wards were identified in the qualitative research as having completed all stages, including the final stage of impact measurement. The ‘live’ data collected for this stage showed improved PE when compared with ‘live’ data collected beforehand, although some difficulties were encountered in measuring outcomes (see Creatively applying quality improvement techniques). This apparent success makes it instructive to examine the survey data for these wards.
In contrast to the main SPC chart, Figure 16 shows one instance of special cause variation, which implies an improvement in PE for wards 1, 4 and 6 over the study period. The initial mean is 25% negative responses, and a step change at the week commencing 24 July sees this decline to 21%, amounting to a 4 percentage point improvement in PE. Although there are two data points significantly above the upper control limit in November and March, there are no trends or runs, and the final data point for the week commencing 2 April is significantly below the lower control limit, as it was in Figure 15. Hence, PE did improve over the study period on the wards that fully completed the PET on this quantitative measure, and this improvement was maintained until the end of the project.
A possible interpretation of Figure 16 is that the project succeeded in improving PE on wards 1, 4 and 6. Yet the difficulties involved in attributing causality between intervention and outcomes also arise here. The apparent relationship between the PET’s implementation and improved PE may be confounded by other factors causing the improvement. For example, it could be that the conditions on wards 1, 4 and 6 were simply more conducive to the PET’s use. As noted above (see The significance of staff pressures), staffing and service pressures appeared to be least acute on these wards, whereas wards 2, 3 and 5 had significant pauses lasting longer than 3 months. It might be that staffing pressures eased on wards 1, 4 and 6 and increased on wards 2, 3 and 5, accounting for the improvement in Figure 16 and the decline in Figure 15. Yet it is also possible that the relative absence of pressures on wards 1, 4 and 6 provided receptive conditions for the PET’s implementation, enabling it to achieve its potential, whereas it faltered on the wards where the underlying conditions were not receptive to it. This would suggest that the PET is indeed a robust way of working with PE data when staff have sufficient time and resources to implement it.
Discussion
An increase in the use of toolkits in recent years has led to calls for their robust evaluation. 144,145 This evaluation highlights the significance of the people, the relationships and the organisational context that shape the creation of toolkits, warning against abstracting these away from the final product. The actual toolkit documentation of PET prototype 1 was found to be insignificant as a mechanism of the improvement work. More important were the facilitation provided by the action researchers, the input of ward staff and patient representatives and the organisational support the action researchers received. Hence, ‘When is a Toolkit a Toolkit?’ appears to us to be an important question to ask. Toolkits that aim to tackle a simple or ‘complicated’ problem159 may work as a standalone document for staff to pick up and use, but toolkits for complex problems are likely to require additional supporting mechanisms, thus forming one part of a more complex and adaptive intervention. 160,161
Penny Hawe uses the metaphor of an ‘event in a system’ to highlight the adaptive and emergent nature of complex interventions and the broader criteria that are required to evaluate them. Complex interventions, as events in systems, seek to improve system functioning. Pre-existing contextual factors shape the form that they take, and success requires that they strengthen or harness these factors. 160 The metaphor is useful for understanding the complexity of the PET ‘intervention’ (a combination of the toolkit document and a flexible facilitation function), which seeks not only to guide improvements to PE at ward level but also to strengthen system capacity for making such improvements.
The expansive facilitation role permitted in the AR meant that it was possible for the action researchers to respond to many of the barriers to working with PE feedback. 11,15,148 Context-sensitive facilitation strategies were enacted in response to the distinct challenges and opportunities existing on the wards, including flexibility regarding the roles and responsibilities of ward team members and how QI techniques were applied. Issues could also be escalated to get buy-in from other actors and departments when required. This potential of facilitation to contribute to the delivery of complex interventions, either by adapting interventions to context or by enhancing the receptiveness of context to interventions, is recognised in wider improvement literatures. 162,163 Harvey and Anderson note that macro-level contextual factors pertaining to health systems at large are difficult to address but they recognise the potential of facilitators to modify ‘inner’ contextual factors, such as staff engagement or local leadership. 162 Yet, while the action researchers were able to overcome micro-level barriers at ward level and improve trust-level processes in some instances, there were also limitations to what they could achieve.
Indeed, escalating issues was easier and more successful when organisational support for the project was already in place, particularly in relation to the support of local PE teams. Pre-existing multidisciplinary team structures or extensive team working also made it easier to implement the PET, and problems were encountered in their absence. Although they had some flexibility in recruiting team members into the project, the action researchers could not always sufficiently expand the ward teams when ward managers were reluctant to involve other staff. Similarly, the effects of staffing pressures, by far the most significant barrier to the improvement work, could be only partly ameliorated. Strategies were developed by the action researchers to get around the pressures, including ‘pop-in meetings’ and in an extreme case having the local PE team support the improvement work, after project meetings had to be cancelled as a result of staff having all non-clinical time pulled. Despite these efforts, it was not always possible to proceed and only three of the six wards (wards 1, 4 and 6) were able to complete the PET in full. This would imply that having a receptive context, in which staff have sufficient time and resources to dedicate to making improvements, is critical to the PET’s success. The survey findings appear to confirm this, although there are limitations to the analysis (see Weaknesses). While the main chart for all wards shows a decline in PE, the chart for wards 1, 4 and 6 shows an improvement. The relative absence of staff and service pressures on these wards may have eased the process of implementing the PET, enabling it to achieve its potential.
The limitations to what the action researchers’ facilitation could achieve in the current study are relevant when assessing the PET intervention’s feasibility for use in other ward settings. An implication is that wards, and even organisations, may have to be pre-selected to ensure receptive contexts for the PET’s delivery. Indeed, the greatest success in terms of improved PE is likely to be achieved when staff are highly engaged, multidisciplinary team structures already exist, extensive organisational support is in place and staff and service pressures are at a minimum. Pre-selecting environments in this way would free up the time of future facilitators of the PET to concentrate on guiding ward teams through improvement cycles rather than having to navigate and modify barriers to the PET’s delivery.
Weaknesses
A weakness of the evaluation concerns the stage of the PET’s development. The final iteration has changed significantly from PET prototype 1, now being targeted at potential facilitators rather than at HCPs. This may enhance its potential utility, but testing is required among this group, and the question of how to achieve success at scale requires further consideration. At least some of the action researchers’ learning regarding how to implement the PET will be tacit and, therefore, difficult to capture and transfer across people and settings.
In addition, the AR project took place in the English NHS between February 2016 and March 2017, a time of considerable organisational pressures. The evaluation findings may become less relevant if these pressures were to lift or the PET was implemented in a setting where they are less evident.
Finally, the survey analysis using SPC methods has several limitations. The baseline of 8 data points is small for the method, as 20 data points are generally advocated,155 and the absence of a control makes it difficult to attribute causality between interventions and outcomes. In addition, the limited amount of data meant that a detailed, ward-level analysis was not possible, and the sheer complexity of the intervention made it difficult to annotate the SPC charts, adding to the difficulty of attributing causation.
Conclusion
This process evaluation suggests that the PET can be a useful guide for ward teams to work on PE if implemented as part of a broad improvement strategy. The qualitative findings provide insight into the factors that shaped the progress of the project. A central question they prompt is how the action researchers’ facilitation may be replicated to achieve success at scale. In addition, the qualitative and quantitative findings highlight the significance of context in determining how easy it is to implement the PET and the PE impacts arising from implementation. Although the SPC chart for all wards showed a decline in PE, it seems to have improved on the wards that fully implemented the PET. The relative absence of staff and service pressures on these wards may have eased the process of implementing the PET, enabling it to achieve its potential. This is significant because, to maximise the PE benefits of the PET, wards and even organisations may have to be preselected for its delivery.
Chapter 7 Discussion and conclusion
In this programme of work, our aim was to understand how PE feedback is currently used and why, based on the findings of previous research, it is not used routinely for the improvement of care on hospital wards. We then sought to move from this point to one where PE feedback is valued and used as a motivator for change. In the process of doing this work our objectives were to:
-
understand what PE measures are currently collected, collated and used to inform service improvement and care delivery (see Chapters 2 and 3)
-
co-design and implement a PET using an AR methodology (see Chapters 4 and 5)
-
conduct a process evaluation to identify transferable learning about how wards use the PET and the factors that influence this (see Chapter 6)
-
refine and disseminate the PET (see Chapters 4–7).
Each of the chapters includes a discussion of the findings therein. This final chapter attempts to draw these findings together to present the key take-home messages.
How each of the studies in this programme of work incrementally built on the previous one
The first two packages of work were the scoping review and the qualitative exploratory study. The scoping review findings were used to devise the topic guides for the focus groups and interviews (the fieldwork activities of the qualitative study). The interim findings of the qualitative work were written up into a short report for internal purposes. The research team and the design team met in summer 2016 to gain a collective understanding of the qualitative findings and how these findings would underpin the co-design process. This included basing a portion of the activities in the three co-design workshops on what had arisen from the qualitative findings. The co-design process produced the first prototype of the PET, which then formed the basis of the 12-month AR testing stage. The research team and the design team met regularly throughout these 12 months in order to continually refine the PET. A mixed-method process evaluation was embedded within the AR period, and learning from the AR and the process evaluation fed into the refinement process.
Objective 1
Understand what PE measures are currently collected, collated and used to inform service improvement and care delivery (see Chapters 2 and 3).
Current patient experience feedback is not fit for purpose
Measuring PE can serve the dual goals of ‘selection’ and ‘change’. 164 In the first case, where performance on the measures is made public, together with comparisons with other providers, patients or their families can use the information collected to select where they would prefer to receive care; this, in turn, might serve to encourage higher performance on these measures. These data can also serve a reputational goal and feed into external review processes, such as those of the CQC in England.
In the second case, organisations, teams and individuals can use the feedback to motivate a change in how care is delivered. In our work, we identified a tension between the collection of data that serves the selection function and that which serves the change function. This finding is reported in other studies that demonstrate a chasm between management and front-line staff: data are collected at an organisational level but, without a change plan and the engagement of front-line clinicians, this rarely leads to improvement. 92 For example, of the 38 different types of feedback we identified in our scoping review (see Chapter 2), none of the mandated measures was considered suitable for informing and monitoring local improvement. The data lack local relevance and they are not accessible in a timely manner. While the qualitative data collected via the FFT have the potential to inform QI, the current focus within the three hospitals we worked with on achieving acceptable response rates meant that using the data for change was rarely considered. Moreover, the amount of data available to teams, and the value of those data, varied hugely. In our A&E department there was an abundance of data, but staff struggled to collate and use these data effectively. In an integrated care unit, turnover of patients was so slow that staff felt that they had too few data to work with. Either way, the generic positive feedback, with many of the comments of the type ‘staff did a great job’, meant that there was little to guide improvements. The ethical implications of patients being asked to complete measures that are not then used to make improvements were noted by some. In general, our findings help to explain earlier contentions that there is little evidence that patient feedback about their experience of care has led to improvements in the quality of health care. 10
Our qualitative work (summarised in Chapter 3) also revealed that organisational structures within hospitals mean that teams are tasked with a narrow focus on an individual source of data (e.g. the FFT, PALS or complaints). This impeded learning at both an organisational and a local level. Ideally, these hospital functions should work more closely together; if this is not possible, there should, at the very least, be a strategic emphasis on learning across the organisation from the variety of feedback sources. Also of note from this qualitative work was the absence of a narrative about the value of PE data beyond the specific purpose of improving PE. Two systematic reviews, first Doyle et al. 7 and then Anhang Price et al.,165 have demonstrated that higher levels of PE are associated with greater patient adherence, best-practice clinical processes, better safety culture and lower resource use, yet this is not articulated by those working in health care, whether at managerial level or at the front line.
Objective 2
Co-design and implement a PET using an AR methodology (see Chapters 4 and 5).
The process of co-design
In Chapters 4 and 5 we describe the co-design of a PET and the subsequent implementation of this toolkit in six different hospital departments using AR. In both designing and implementing the toolkit, we identified tensions, some of which echo ideas that had already arisen from the scoping and qualitative work. Before discussing these, it is important to reflect on the methodological tensions that we ourselves experienced when using co-design within a research project. Co-design aims to bring staff and patients together to reflect on their experiences and to develop, together, improvement priorities and ideas for change. However, within a research project, the intervention design phase inevitably comes after the research team have spent some time understanding the nature of the problem and considering potential solutions. This means that the researchers may have an important understanding and perspective, and yet staying close to co-design processes means that the voice of the researcher is not a central one. Those involved in co-design within research projects may need to develop a means by which they can optimise the co-design process by utilising this knowledge without compromising the inherently user-centred nature of co-design. In the present project, this compromise was achieved firstly by involving researchers as partners in the co-design process and secondly through meetings between the co-design team and the research team before and after each workshop. These meetings allowed the earlier research findings to feed into decisions about how to structure the workshops and which fundamental conversations to encourage during them.
Another important finding, with implications beyond this particular co-design project, is the potential for staff involved to idealise the work context. When staff are taken off the ward (or other care environment) and given some time and space to be involved in co-designing an intervention that they will later deliver in the hustle and bustle of a busy ward, they are not necessarily able to anticipate the difficulties that they will face. Alternatively, they may ignore these challenges because they are fearful of admitting to them in a group setting, particularly in a group that includes patients. This has important implications for co-design: staff may be designing an intervention for an idealised environment. For example, although staff in the co-design process suggested the need to bring a multidisciplinary team together to work on the PE project and to agree a set of principles in an initial meeting, this did not happen in practice. This stage was not perceived to be actively moving the project forward, and it was also perceived as unrealistic when multidisciplinary teams rarely came together at all for any other purpose.
In co-designing and implementing the toolkit, an interesting trade-off played out between the need for detail, because staff did not necessarily know how to collect useful feedback, how to measure for improvement and so on, and the need for brevity. Feedback from the teams was that the appearance of the product needed to ‘sell it’ and that it should not look like a major task to undertake. It needed to be punchy and bring the process alive, yet still contain all the useful resources that had been collated along the way. These conflicting aims meant that, in the final solution, the detailed toolkit was designed for use by the person facilitating the process and an overview was developed to introduce teams to the key steps in the process.
Data for improvement?
As we moved into the co-design phase we came to understand more about the huge volumes of PE feedback that staff are exposed to, but that are not in the same form as that required by external and governing bodies. Staff receive written and verbal compliments, they receive grumbles and complaints, they receive smiles and hugs, chocolates and cards from patients and families. In fact, dealing with this informal feedback, good and bad, is what they do; it is embedded within their work. In thinking about PE, therefore, nursing staff are focused on their current patients, those that they are caring for at the time. This is in direct contrast to the focus of the hospital management who produce PE feedback reports often based on the experience of previous patients who were cared for weeks, months or even years before. This disconnect may help to offer some explanation for the lack of engagement of staff with the PE feedback reported in previous studies. Although this feedback provides staff with a ‘feel’ for how patients and their families are experiencing the service, staff did recognise that it did not provide useful information that would guide improvement. In some of the wards we worked with, there was scant PE feedback available that could be used for improvement. The action researchers, who were able to respond creatively and flexibly to address this gap, worked with patient representatives to collect new information for the purposes of this project.
The emotional response to patient feedback
Throughout the co-design process, the importance of considering positive feedback was a central narrative. Staff expressed a concern about the relentlessly negative feedback they were exposed to and yet, in reality, much patient feedback was positive. They also discussed the need to counteract the human tendency to focus only on the negative. This need to celebrate what was good, and the role that this could play in increasing staff morale, was deemed to be critical within the highly pressured environments in which staff worked. In one department, staff referred to the use of positive FFT comments in a staff huddle to help improve the mood of the team at the end of a shift. This became an important focus of the design of the prototype toolkit.
Dealing with the emotional response of staff to feedback also became, as the action researchers were acutely aware, one of their key roles. It soon became clear that staff did not have the time or the skills to collate their patient feedback and that they needed support to make sense of it. This task was completed by the action researchers, who had by this time taken on a major facilitative role. Based on the co-design process, the presentation of feedback always included a strong emphasis on positive feedback. This helped to offer a form of reassurance to staff that they were doing something right. Despite this, staff focused on the negative because they saw this as the driver for improvement. It was not unusual for staff to become despondent, defensive, upset or frustrated when viewing the feedback. In most cases the feedback was not a surprise, but it upset them to hear, for example, that patients were bored and depressed. The feedback served to increase cognitive dissonance,166 an uncomfortable feeling associated with knowing that what you want, or claim, to do is not what you actually do. Staff often knew that patients were not getting the best care but, given the pressures they were under and the resource constraints they experienced, they often felt helpless to address these problems. The facilitators were able to coach staff to work towards small changes, asking them to consider what they could do tomorrow, for example, and this sense of the possible helped them to move beyond their initial emotional reaction. Facilitators also recognised the importance of celebrating success during the course of the implementation of the toolkit, and they used certificates to keep staff on board and to assure them that they were doing well even in the face of adversity. Staff responded very positively to this encouragement, and it was clear that positive reinforcement was not something that they routinely experienced in their work. This lends support to approaches for health-care improvement that attempt to identify and reward positive cultures and behaviours, such as Positive Deviance167 and Learning from Excellence. 168
Objective 3
Conduct a process evaluation to identify transferable learning about how wards use the PET and the factors that influence this (see Chapter 6).
Facilitation, not the toolkit, as the change mechanism
In a systematic review of approaches to using PE data, Gleeson11 reports ‘a lack of expertise in QI and confidence in interpreting patient experience data effectively’. Our evidence strongly supports this finding. Staff on the wards needed support to collate and interpret the data and to test out their ideas in PDSA cycles. They also needed help to access other functions or teams outside their own team to address problems that were not within their immediate control. This is linked to a second tension, highlighted within the co-design process, between bottom-up and top-down improvement and who owns the toolkit. Although this varied to some extent across the six teams we worked with, the teams came to rely on the facilitating role of the action researchers to guide them through the toolkit and to do some of the work (e.g. collating and thematising the qualitative comments), while the teams took ownership of the day-to-day work itself. The toolkit document itself acted as a cue to action and might have been a necessary precondition for engagement from the teams. However, it did not on its own sustain that engagement.
For some teams, the facilitative role also helped them make links with other parts of the hospital infrastructure that were necessary to access the data that they needed or to make change happen. Within the Promoting Action on Research Implementation in Health Services (PARIHS) framework, the facilitator role is proposed to be central to the process of getting evidence into practice,169 in this case implementing a PET. The skills proposed by Rycroft-Malone169 and others for successful facilitation are co-counselling, critical reflection, giving meaning, flexibility of role and realness/authenticity. These skills were critical to the success of the current project, and it is likely that, without the opportunity for facilitation that came with the AR method we adopted, the toolkit would have been of little value to the ward teams. Flexibility, patience, creativity, pragmatism, persistence, understanding and responsiveness were the most important attributes demonstrated by our facilitators.
This finding echoes other recent work15 identifying that, even when staff valued patient feedback (there was normative legitimacy) and owned and were committed to a plan for change (there was structural legitimacy), without organisational readiness teams were rarely able to enact the changes that they wanted to make. Within this project, the action researchers served this function. This research suggests that the most appropriate model for using the toolkit is one in which external facilitators, with particular skills in QI and the ability and legitimacy to draw on the services of other hospital departments, support teams in making improvements beyond those within their immediate control. Without this support, the patient voice is heard only by those who already have direct contact with patients and who, it could be argued, already know how patients are experiencing their care. Thus, the findings here suggest that a bottom-up approach to data collection that is meaningful to staff and patients is most appropriate. The responsibility for acting on this information must be shared across the organisation, because this approach is most likely to both facilitate organisational learning and support change.
The process evaluation demonstrated that, even with effective facilitation, three of the teams did not complete all six steps from establishing a team through to measuring improvement. The greatest success in terms of improved PE is likely to be achieved when staff are highly engaged, multidisciplinary team structures already exist, extensive organisational support is in place and staff and service pressures are at a minimum. Where this was not the case, ward teams were not able to complete the six steps of the toolkit within a year and often had to stop and start, gaining and losing momentum a number of times. Our quantitative analysis, using SPC charts, showed that the teams that did complete the entire process achieved a 4 percentage point improvement in PE, with negative response rates dropping from 25% to 21%. Although these improvements were small, the survey deployed (a short version of the Picker PE survey) contained items that did not necessarily align with the changes that staff chose to focus on based on the collection of their own patient feedback.
Robustness of the research and limitations
The strengths and limitations of each of the studies conducted within this programme of work are set out in the individual chapters. Here, we discuss the overall approach. A key strength of this research and the AR design was the flexibility that this approach allowed us. Rather than developing a precisely specified intervention that was handed over to teams and evaluated at a distance, our approach meant that we were able to respond in a flexible way to the constraints of time and resources experienced by staff, as well as their different levels of skill and motivation. We could adapt and iterate the intervention, developing content for the toolkit that addressed existing gaps and needs of the teams. This meant that, even when teams were struggling to engage with the implementation process, they remained engaged in the project through a recognition that we were ‘on their side’ and that our aim was to make the toolkit work for them. Given the level of involvement of our action researchers, it was important that an independent evaluation was conducted alongside, and this did occur in a robust manner. However, it could be argued that being part of the wider research team and sharing an office with the other researchers in the team meant that it was not possible for the evaluator to be entirely objective in their work. This was tempered by meetings of the evaluator with the co-investigator (Laura Sheard) and the PI (Rebecca Lawton) who, being much more removed from the front-line teams, were able to prompt, question and probe so as to understand the emerging themes. This process of discussion about the way that the intervention was working and the role of facilitation led to a conceptual analysis of the nature of logic models for complex interventions of this kind. This was not anticipated in the original proposal and it represents a key strength of the work undertaken here.
Future research
The work described here directly addresses the question of whether or not the use and usefulness of PE feedback for hospital teams can be enhanced. The answer to this question is ‘yes’, but with some caveats. The first caveat is that it is not always possible for front-line teams to use the data that are currently collected (e.g. the Picker data and the FFT) to inform improvement activities. It may be necessary for teams to gather additional data that are timely and specific to their service. Currently, data are gathered to serve a benchmarking or ‘selection’ aim rather than a strategic ‘change’ agenda. An obvious question prompted by this finding is whether or not a different type of data collection process that addresses the ‘change’ agenda at the front line might also serve the benchmarking goal at a senior level. The following research question needs to be addressed: which goal should be prioritised through the use of PE data?
Patient representatives who worked with us during this project acted to collect data, support and encourage staff to make changes and, in some cases, pushed for change. There is huge potential for patient volunteers to become more involved in the process of gathering patient feedback and supporting improvement initiatives. However, without the will and resources to recruit, train and support these volunteers, the benefits of their involvement are unlikely to be realised. Our programme of work did not set out to evaluate the involvement of patient representatives in delivering PE projects in health care. Future research is necessary to better understand the impact of patient volunteer involvement on the use of PE feedback to make improvements and whether or not involvement of this kind is cost-effective.
Within this programme of work we co-designed a PET for use on hospital wards. As the toolkit provides a blueprint for how to work with PE data as a multidisciplinary team and how to use this information to inform and evaluate improvements, it is entirely plausible that, with local adaptations, the toolkit would be applicable in general practice, community services, social care or other settings. However, this is yet to be evaluated. We would therefore recommend that future research looks to implement and evaluate use of the toolkit (and its inherent facilitation) across a range of different settings to understand its wider transferability. The importance of facilitation in our work implies that the mechanisms for change are embedded mostly in the skills and abilities of the facilitators to encourage and support teams. A further area for future research could therefore be a close examination of the contextual preconditions under which the PET is successfully adopted.
Our own plans for future research will focus specifically on the role of local facilitators and how they can be supported to implement the toolkit. We have submitted two funding applications to the Health Foundation, under the Innovating for Improvement and ‘Q Exchange’ schemes. Both will explore how local facilitators can be supported to coach health-care teams to work through the six steps of the toolkit and whether or not this is a viable approach to its wider implementation. The Point of Care Foundation, the Improvement Academy for Yorkshire and Humber and NHS England have all expressed a desire to work with us to deliver this research.
Implications for health care
- At present, front-line staff do not appear to have the skills or the time to process the data from different sources.
- Data about PE of care are processed within different hospital departments (e.g. complaints team, PE team, improvement team). This means that there is little ‘organisational-level learning’ and that clinical teams receive multiple sources of data that they struggle to deal with.
- Time spent on the collection of data for ‘selection’ or ‘reputational’ purposes is largely perceived by front-line staff to be a waste of time.
- Mandated PE measures do not provide timely or meaningful information that front-line teams feel they can act on to make improvements.
- Staff have an emotional response to PE feedback that must be acknowledged within the improvement process.
- Without facilitation from a skilled person who is enabled and trusted to work with the health-care teams to implement the toolkit, improvement is unlikely to be demonstrated.
Conclusion
This programme of work is the first to test a co-designed PET, the Yorkshire PET, with hospital teams in the UK. The learning generated from the process of developing and implementing the toolkit has important implications for research, policy and practice. Specifically, the current focus in England on collecting data for ‘selection’ and ‘reputational’ purposes rather than for ‘change’ means that using the data for improvement is often a secondary consideration. Staff struggle emotionally and practically to respond to data collected from patients about their experience, and they need external facilitation to help them use this information to make improvements. However, the data collected here demonstrate that, when staff do become involved in a project of this kind, they find it engaging and rewarding, and that it leads to changes in the way that they do things and to improvements in the experience of patients.
Acknowledgements
We would like to thank the health-care staff on the six wards who were involved in the study. Without their involvement, this study would not have been possible. We are mindful of how much pressure they were continually under during the working day and we cannot express how much we appreciate their sustained, long-term involvement in this study.
Equally, we would like to thank the health-care managers (at all three NHS trusts) whom we met during this study and who were part of the PE, QI, risk management or complaints teams. Many of them not only assisted the research team in finding its feet within the organisation but also, in some cases, propelled the research study forward. Credit is also due to the senior management at each of the three trusts, who gave permission for us to conduct the research within their hospital sites. To preserve anonymity, we cannot name the sites or individual staff.
This study would have been impossible without the strong patient representation given so diligently by all eight patient representatives, who worked tirelessly on this study in advisory capacities and, in some cases, as active participants. They are (in alphabetical order) Richard Eastoe, Philip Elphick, Sheila Furness, Saleena Khatun, Linda Lovett, Pat Newdall, Wendy Paley and Fran Senior. Their input helped guide and shape the course of the study. We particularly appreciate the data collection role that some took on in order to provide richer and more recent feedback from current patients to ward staff who were finding their existing data sources difficult to work with.
Members of the quarterly steering group were integral to the direction of the study and kept a keen eye on progress, milestones and outcomes. Thanks to (in alphabetical order) Shelley Bailey, Chris Brown, Paul Chamberlain, Ann Dell, Rosie Horsman, Tahir Idris, Mel Jackson, Krystina Kozlowska, Alison Lovatt, Jane O’Hara, Maggie Peat, Alun Pymer, Ann-Marie Riley, Beverley Slater, Peter Walsh and Sally-Anne Wilson.
An informal ‘learning set’ arose from discussions among the researchers leading programmes of work funded under the same Health Services and Delivery Research PE call. This developed into an opportunity for researchers encountering similar issues to meet in person twice a year and to offer each other support via e-mail and telephone. We would particularly like to thank Louise Locock (Professor, University of Aberdeen), Caroline Sanders (Reader, University of Manchester) and Glenn Robert (Professor, King’s College London) for their interest in bringing the people from different studies together in order for the research activity to become more than the sum of its parts.
Other contributors whom we would like to acknowledge are:
- Members of the Improvement Academy who provided dialogue and advice, particularly around the improvement science aspects of the testing phase: Alison Lovatt and Beverley Slater.
- Those researchers in the Yorkshire Quality and Safety Research Group who collected quantitative survey data from patients’ bedsides on a weekly basis: Caroline Reynolds, Sally Moore, Lesley Hughes, Gail Opio-Te, Lucy Mitchinson, Mayur Parmar and Sophie Skeggs.
- Those who did preparatory work on the design process: Matt Dexter and Paul Chamberlain.
- Those who did preparatory work on the scoping review: Lesley Hughes and Kat Melling.
- Michael Rooney, who provided statistical advice on the survey results.
- Those who provided administrative support in the preparation of the final report: Cathy Hulin, Caroline Reynolds and Carolyn Clover.
- Jane O’Hara, for providing ongoing support via discussions about PE.
This research was supported by the NIHR Collaborations for Leadership in Applied Health Research and Care (CLAHRC) Yorkshire and Humber [www.clahrc-yh.nihr.ac.uk (accessed 16 September 2019)].
Contributions of authors
Laura Sheard (Principal Research Fellow, health sociologist) was Study Manager, which involved leading the project on a day-to-day basis, keeping it to time, task and budget. She contributed to the overall conception of the work and designed the qualitative exploratory study and process evaluation. She analysed qualitative data, drafted Chapter 3 and led on producing the final report.
Claire Marsh (Senior Research Fellow, interdisciplinary health researcher) led the AR stage of the study and was the lead for the patient involvement throughout the whole programme of work. She contributed to the overall conception of the work and designed the AR study. She co-analysed the AR data and drafted Chapters 2 and 5.
Thomas Mills (Research Fellow, health researcher) collected all qualitative data for the process evaluation and co-ordinated the collection of all quantitative data. Thomas analysed both data sets. He drafted Chapter 6.
Rosemary Peacock (Senior Research Fellow, health researcher) collected all data for the qualitative exploratory study and was the researcher for the AR study. Rosemary assisted with analysis for the qualitative exploratory study and co-analysed the AR data.
Joseph Langley (Principal Research Fellow, design engineer and knowledge mobiliser) led the second half of the co-design process and co-drafted Chapter 4. He also led on the iterative prototyping of the toolkit during the testing year.
Rebecca Partridge (Researcher, design) was researcher in the co-design process, with a focus on visualising and producing the toolkit materials. She co-drafted Chapter 4.
Ian Gwilt (Professor, designer and researcher) led the first half of the co-design process, developed the first prototype of the toolkit and co-drafted Chapter 4.
Rebecca Lawton (Professor in Psychology of Healthcare) was the PI and had overall responsibility for the study. Rebecca chaired the steering committee meetings and contributed to the overall conception of the work. She drafted Chapters 1 and 7.
All authors reviewed all chapters and were involved in critically revising the final report for important intellectual content.
Publications
Mills T, Lawton R, Sheard L. Improving patient experience in hospital settings: assessing the role of toolkits and action research through a process evaluation of a complex intervention [published online ahead of print 16 June 2019]. Qual Health Res 2019.
Mills T, Lawton R, Sheard L. Advancing complexity science in healthcare research: the logic of logic models. BMC Med Res Methodol 2019;19:55.
Sheard L, Peacock R, Marsh C, Lawton R. What’s the problem with patient experience feedback? A macro and micro understanding, based on findings from a three-site UK qualitative study. Health Expect 2019;22:46–53.
Marsh C, Peacock R, Sheard L, Hughes L, Lawton R. Patient experience feedback in UK hospitals: what types are available and what are their potential roles in quality improvement (QI)? Health Expect 2019;22:317–26.
Data-sharing statement
All qualitative data generated that can be shared are contained within the report. All data queries and requests should be submitted to the corresponding author for consideration.
Patient data
This work uses data provided by patients and collected by the NHS as part of their care and support. Using patient data is vital to improve health and care for everyone. There is huge potential to make better use of information from people’s patient records, to understand more about disease, develop new treatments, monitor safety, and plan NHS services. Patient data should be kept safe and secure, to protect everyone’s privacy, and it’s important that there are safeguards to make sure that it is stored and used responsibly. Everyone should be able to find out about how patient data are used. #datasaveslives You can find out more about the background to this citation here: https://understandingpatientdata.org.uk/data-citation.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health and Social Care. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health and Social Care.
References
- Care Opinion 2016. www.careopinion.org.uk (accessed 13 July 2018).
- Manary MP, Boulding W, Staelin R, Glickman SW. The patient experience and health outcomes. N Engl J Med 2013;368:201-3. https://doi.org/10.1056/NEJMp1211775.
- Ward JK, Armitage G. Can patients report patient safety incidents in a hospital setting? A systematic review. BMJ Qual Saf 2012;21:685-99. https://doi.org/10.1136/bmjqs-2011-000213.
- Francis R. Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry. London: The Stationery Office; 2013.
- Berwick DM. A Promise to Learn – A Commitment to Act: Improving the Safety of Patients in England. London: Department of Health and Social Care; 2013.
- Keogh B. Review into the Quality of the Care and Treatment Provided by 14 Hospitals in England. London: The Stationery Office; 2013.
- Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open 2013;3. https://doi.org/10.1136/bmjopen-2012-001570.
- Trzeciak S, Gaughan JP, Bosire J, Mazzarelli AJ. Association Between Medicare Summary Star Ratings for Patient Experience and Clinical Outcomes in US Hospitals. J Patient Exp 2016;3:6-9. https://doi.org/10.1177/2374373516636681.
- NHS Choices. What Is PALS (Patient Advice and Liaison Service)? n.d. www.nhs.uk/chq/Pages/1082.aspx (accessed 13 July 2018).
- Coulter A, Locock L, Ziebland S, Calabrese J. Collecting data on patient experience is not enough: they must be used to improve care. BMJ 2014;348. https://doi.org/10.1136/bmj.g2225.
- Gleeson H, Calderon A, Swami V, Deighton J, Wolpert M, Edbrooke-Childs J. Systematic review of approaches to using patient experience data for quality improvement in healthcare settings. BMJ Open 2016;6. https://doi.org/10.1136/bmjopen-2016-011907.
- Wolf JA, Niederhauser V, Marshburn D, LaVela S. Defining patient experience. Patient Exp J 2014;1:7-19. https://doi.org/10.35680/2372-0247.1000.
- LaVela S, Gallan A. Evaluation and measurement of patient experience. Patient Exp J 2014;1:28-36. https://doi.org/10.35680/2372-0247.1003.
- Vincent C, Burnett S, Carthey J. The Measurement and Monitoring of Safety. London: The Health Foundation; 2013.
- Sheard L, Marsh C, O’Hara J, Armitage G, Wright J, Lawton R. The Patient Feedback Response Framework: understanding why UK hospital staff find it difficult to make improvements based on patient feedback – a qualitative study. Soc Sci Med 2017;178:19-27. https://doi.org/10.1016/j.socscimed.2017.02.005.
- Gkeredakis E, Swan J, Powell J, Nicolini D, Scarbrough H, Roginski C, et al. Mind the gap: understanding utilisation of evidence and policy in health care management practice. J Health Organ Manag 2011;25:298-314. https://doi.org/10.1108/14777261111143545.
- The Health Foundation. Measuring Patient Experience: Evidence Scan. 2013.
- Lawton R, McEachan RR, Giles SJ, Sirriyeh R, Watt IS, Wright J. Development of an evidence-based framework of factors contributing to patient safety incidents in hospital settings: a systematic review. BMJ Qual Saf 2012;21:369-80. https://doi.org/10.1136/bmjqs-2011-000443.
- Lawton R, O’Hara JK, Sheard L, Armitage G, Cocks K, Buckley H, et al. Can patient involvement improve patient safety? A cluster randomised control trial of the Patient Reporting and Action for a Safe Environment (PRASE) intervention. BMJ Qual Saf 2017;26:622-31. https://doi.org/10.1136/bmjqs-2016-005570.
- Archer B. The Nature of Research. CoDesign 1995:6-13.
- Dixon-Woods M, Baker R, Charles K, Dawson J, Jerzembek G, Martin G, et al. Culture and behaviour in the English National Health Service: overview of lessons from a large multimethod study. BMJ Qual Saf 2014;23:106-15. https://doi.org/10.1136/bmjqs-2013-001947.
- Reeves R, West E, Barron D. Facilitated patient experience feedback can improve nursing care: a pilot study for a phase III cluster randomised controlled trial. BMC Health Serv Res 2013;13. https://doi.org/10.1186/1472-6963-13-259.
- Slater BL, Lawton R, Armitage G, Bibby J, Wright J. Training and action for patient safety: embedding interprofessional education for patient safety within an improvement methodology. J Contin Educ Health Prof 2012;32:80-9. https://doi.org/10.1002/chp.21130.
- Marsh C, Peacock R, Sheard L, Hughes L, Lawton R. Patient experience feedback in UK hospitals: what types are available and what are their potential roles in quality improvement (QI)? Health Expect 2019;22:317-26. https://doi.org/10.1111/hex.12885.
- Churchill N. Making the Most of Feedback. n.d. www.england.nhs.uk/blog/making-the-most-of-feedback/ (accessed 13 July 2018).
- Robert G, Cornwell J, Black N. Friends and family test should no longer be mandatory. BMJ 2018;360. https://doi.org/10.1136/bmj.k367.
- Gillespie A, Reader TW. The Healthcare Complaints Analysis Tool: development and reliability testing of a method for service monitoring and organisational learning. BMJ Qual Saf 2016;25:937-46. https://doi.org/10.1136/bmjqs-2015-004596.
- de Vos MS, Hamming JF, Marang-van de Mheen PJ. The problem with using patient complaints for improvement. BMJ Qual Saf 2018;27:758-62. https://doi.org/10.1136/bmjqs-2017-007463.
- Griffiths A, Leaver MP. Wisdom of patients: predicting the quality of care using aggregated patient feedback. BMJ Qual Saf 2018;27:110-18. https://doi.org/10.1136/bmjqs-2017-006847.
- Donetto S, Tsianakas V, Robert G. Using Experience-based Co-design to Improve the Quality of Healthcare: Mapping Where We Are Now and Establishing Future Directions. London: King’s College, London; 2014.
- Solberg LI, Mosser G, McDonald S. The three faces of performance measurement: improvement, accountability, and research. Jt Comm J Qual Improv 1997;23:135-47. https://doi.org/10.1016/S1070-3241(16)30305-4.
- Raleigh V, Thompson J, Jabbal J, Graham C, Sizmur S, Coulter A. Patients’ Experiences of Using Hospital Services. London: The King’s Fund/Picker Institute Europe; 2015.
- de Silva D. Measuring Patient Experience. London: The Health Foundation; 2013.
- Beattie M, Murphy DJ, Atherton I, Lauder W. Instruments to measure patient experience of healthcare quality in hospitals: a systematic review. Syst Rev 2015;4. https://doi.org/10.1186/s13643-015-0089-0.
- Edwards KJ, Walker K, Duff J. Instruments to measure the inpatient hospital experience: a literature review. Patient Exp J 2015;2:77-85. https://doi.org/10.35680/2372-0247.1088.
- Gibbons EJ, Graham C, Flott KM, Jenkinson C, Fitzpatrick R. Developing approaches to the collection and use of evidence of patient experience below the level of national surveys. Patient Exp J 2016;3:92-100. https://doi.org/10.35680/2372-0247.1118.
- Flott KM, Graham C, Darzi A, Mayer E. Can we use patient-reported feedback to drive change? The challenges of using patient-reported feedback and how they might be addressed. BMJ Qual Saf 2017;26:502-7. https://doi.org/10.1136/bmjqs-2016-005223.
- Entwistle V, Firnigl D, Ryan M, Francis J, Kinghorn P. Which experiences of health care delivery matter to service users and why? A critical interpretive synthesis and conceptual map. J Health Serv Res Policy 2012;17:70-8. https://doi.org/10.1258/jhsrp.2011.011029.
- van der Vleuten CP, Schuwirth LW. Assessing professional competence: from methods to programmes. Med Educ 2005;39:309-17. https://doi.org/10.1111/j.1365-2929.2005.02094.x.
- Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol 2005;8:19-32. https://doi.org/10.1080/1364557032000119616.
- Mumsnet 2016. www.mumsnet.com (accessed 13 July 2018).
- Coulter A, Cleary PD. Patients’ experiences with hospital care in five countries. Health Aff 2001;20:244-52. https://doi.org/10.1377/hlthaff.20.3.244.
- NHS Surveys 2016. www.nhssurveys.org/surveys (accessed 13 July 2018).
- Scottish Government. Inpatient Experience Survey 2016. n.d. www.gov.scot/Topics/Statistics/Browse/Health/InpatientSurvey (accessed 13 July 2018).
- Department of Health, Social Services and Public Safety (Northern Ireland). Inpatient Patient Experience Survey 2014. 2016. www.health-ni.gov.uk/sites/default/files/publications/dhssps/inpatient-patient-experience-survey-2014.pdf (accessed 13 July 2018).
- NHS Wales. NHS Complaints. n.d. www.wales.nhs.uk/ourservices/contactus/nhscomplaints (accessed 13 July 2018).
- Bos N, Sizmur S, Graham C, van Stel HF. The accident and emergency department questionnaire: a measure for patients’ experiences in the accident and emergency department. BMJ Qual Saf 2013;22:139-46. https://doi.org/10.1136/bmjqs-2012-001072.
- Scottish Government. Maternity Survey. 2015. www.gov.scot/Topics/Statistics/Browse/Health/maternitysurvey (accessed 13 July 2018).
- Jones D, Lester C. Hospital care and discharge: patients’ and carers’ opinions. Age Ageing 1994;23:91-6. https://doi.org/10.1093/ageing/23.2.91.
- Jenkinson C, Coulter A, Bruster S. The Picker Patient Experience Questionnaire: development and validation using data from in-patient surveys in five countries. Int J Qual Health Care 2002;14:353-8. https://doi.org/10.1093/intqhc/14.5.353.
- Hewitson P, Skew A, Graham C, Jenkinson C, Coulter A. People with limiting long-term conditions report poorer experiences and more problems with hospital care. BMC Health Serv Res 2014;14. https://doi.org/10.1186/1472-6963-14-33.
- Thomas LH, McColl E, Priest J, Bond S, Boys RJ. Newcastle satisfaction with nursing scales: an instrument for quality assessments of nursing care. Qual Health Care 1996;5:67-72. https://doi.org/10.1136/qshc.5.2.67.
- Evans J, Rose D, Flach C, Csipke E, Glossop H, McCrone P, et al. VOICE: developing a new measure of service users’ perceptions of inpatient care, using a participatory methodology. J Ment Health 2012;21:57-71. https://doi.org/10.3109/09638237.2011.629240.
- Murrells T, Robert G, Adams M, Morrow E, Maben J. Measuring relational aspects of hospital care in England with the ‘Patient Evaluation of Emotional Care during Hospitalisation’ (PEECH) survey questionnaire. BMJ Open 2013;3. https://doi.org/10.1136/bmjopen-2012-002211.
- Rattray J, Johnston M, Wildsmith JA. The intensive care experience: development of the ICE questionnaire. J Adv Nurs 2004;47:64-73. https://doi.org/10.1111/j.1365-2648.2004.03066.x.
- Fitzpatrick R, Graham C, Gibbons EJ, King E, Flott KM, Jenkinson C. Development of New Models for Collection and Use of Patient Experience Information in the NHS: PRP 0700074, Final Report 2014.
- Knowles E, O’Cathain A, Nicholl J. Patients’ experiences and views of an emergency and urgent care system. Health Expect 2012;15:78-86. https://doi.org/10.1111/j.1369-7625.2010.00659.x.
- Baker R, Preston C, Cheater F, Hearnshaw H. Measuring patients’ attitudes to care across the primary/secondary interface: the development of the patient career diary. Qual Health Care 1999;8:154-60. https://doi.org/10.1136/qshc.8.3.154.
- NHS England. NHS Complaints. 2016. www.wales.nhs.uk/ourservices/contactus/nhscomplaints (accessed 13 July 2018).
- Scottish Government. Complain About an NHS Service. n.d. www.mygov.scot/nhs-complaints (accessed 15 November 2018).
- niDirect. Make a Complaint Against the Health Service. n.d. www.nidirect.gov.uk/articles/make-complaint-against-health-service (accessed 13 July 2018).
- The Scottish Association of Citizens Advice Bureaux. Patient Advice and Support Service. n.d. www.cas.org.uk/pass (accessed 13 July 2018).
- NHS Wales. Patient Support and Advisory Service. 2016. www.wales.nhs.uk/sitesplus/862/page/65382 (accessed 13 July 2018).
- Health and Social Care Northern Ireland. Patient and Client Council Northern Ireland. n.d. www.patientclientcouncil.hscni.net (accessed 13 July 2018).
- iWantGreatCare. 2016. www.iwantgreatcare.org/ (accessed 13 July 2018).
- Tsianakas V, Maben J, Wiseman T, Robert G, Richardson A, Madden P, et al. Using patients’ experiences to identify priorities for quality improvement in breast cancer care: patient narratives, surveys or both? BMC Health Serv Res 2012;12. https://doi.org/10.1186/1472-6963-12-271.
- Bridges J, Gray W, Box G, Machin S. Discovery Interviews: a mechanism for user involvement. Int J Older People Nurs 2008;3:206-10. https://doi.org/10.1111/j.1748-3743.2008.00128.x.
- Baron S. Evaluating the patient journey approach to ensure health care is centred on patients. Nurs Times 2009;105:20-3.
- NHS Institute for Innovation and Improvement. Kinda Magic. Peninsula Community Health Community Interest Company; 2013.
- Locock L, Robert G, Boaz A, Vougioukalou S, Shuldham C, Fielden J, et al. Testing accelerated experience-based co-design: a qualitative study of using a national archive of patient experience narrative interviews to promote rapid patient-centred service improvement. Health Serv Deliv Res 2014;2. https://doi.org/10.3310/hsdr02040.
- NHS England. The 15 Steps Challenge Toolkit. 2016. www.england.nhs.uk/participation/resources/15-steps-challenge/ (accessed 13 July 2018).
- NHS Education for Scotland. Patient Safety and Clinical Skills. 2017. www.nes.scot.nhs.uk/education-and-training/by-theme-initiative/patient-safety-and-clinical-skills/always-events.aspx (accessed 13 July 2018).
- NHS England. Friends and Family Test Data. 2014. www.england.nhs.uk/fft/friends-and-family-test-data/ (accessed 24 September 2019).
- Benson T, Potts HW. A short generic patient experience questionnaire: howRwe development and validation. BMC Health Serv Res 2014;14. https://doi.org/10.1186/s12913-014-0499-z.
- NHS England. How to Complain to the NHS. 2016. www.nhs.uk/NHSEngland/complaints-and-feedback/Pages/nhs-complaints.aspx (accessed 13 July 2018).
- Reeves R, Seccombe I. Do patient surveys work? The influence of a national survey programme on local quality-improvement initiatives. Qual Saf Health Care 2008;17:437-41. https://doi.org/10.1136/qshc.2007.022749.
- Greaves F, Ramirez-Cano D, Millett C, Darzi A, Donaldson L. Harnessing the cloud of patient experience: using social media to detect poor quality healthcare. BMJ Qual Saf 2013;22:251-5. https://doi.org/10.1136/bmjqs-2012-001527.
- Bate P, Robert G. Bringing User Experience to Healthcare Improvement: The Concepts, Methods and Practices of Experience-Based Design. Oxford: Radcliffe Publishing; 2007.
- Martin GP, McKee L, Dixon-Woods M. Beyond metrics? Utilizing ‘soft intelligence’ for healthcare quality and safety. Soc Sci Med 2015;142:19-26. https://doi.org/10.1016/j.socscimed.2015.07.027.
- Gilbert D, Goodman N. Making Sense and Making Use of Patient Experience Data. Membership Engagement Services and InHealth Associates 2015. www.membra.co.uk/wp-content/uploads/2018/04/FINAL-REPORT-DIGITAL-NEW-BRANDING.pdf (accessed 24 September 2019).
- Sheard L, Peacock R, Marsh C, Lawton R. What’s the problem with patient experience feedback? A macro and micro understanding, based on findings from a three-site UK qualitative study. Health Expect 2019;22:46-53. https://doi.org/10.1111/hex.12829.
- NHS England. Friends and Family Test. 2014. www.england.nhs.uk/ourwork/pe/fft (accessed 13 July 2018).
- NHS Confederation. Feeling Better? Improving Patient Experience in Hospital. 2010.
- Locock L, Robert G, Boaz A, Vougioukalou S, Shuldham C, Fielden J, et al. Using a national archive of patient experience narratives to promote local patient-centered quality improvement: an ethnographic process evaluation of ‘accelerated’ experience-based co-design. J Health Serv Res Policy 2014;19:200-7. https://doi.org/10.1177/1355819614531565.
- Lee R, Baeza J, Fulop N. The use of patient feedback by hospital boards of directors: a qualitative study of two NHS hospitals in England. BMJ Qual Saf 2018;27:103-9.
- Jones L, Pomeroy L, Robert G, Burnett S, Anderson JE, Fulop NJ. How do hospital boards govern for quality improvement? A mixed methods study of 15 organisations in England. BMJ Qual Saf 2017;26:978-86. https://doi.org/10.1136/bmjqs-2016-006433.
- Robert G, Cornwell J. Rethinking policy approaches to measuring and improving patient experience. J Health Serv Res Pol 2013;18:67-9. https://doi.org/10.1177/1355819612473583.
- Adams M, Maben J, Robert G. ‘It’s sometimes hard to tell what patients are playing at’: how healthcare professionals make sense of why patients and families complain about care. Health 2017;22:603-23. https://doi.org/10.1177/1363459317724853.
- Sheard L, Marsh C, O’Hara J, Armitage G, Wright J, Lawton R. Exploring how ward staff engage with the implementation of a patient safety intervention: a UK-based qualitative process evaluation. BMJ Open 2017;7. https://doi.org/10.1136/bmjopen-2016-014558.
- Guest G, MacQueen KM, Namey EE. Applied Thematic Analysis. London: SAGE Publications; 2012.
- Waring J, Crompton A. A ‘movement for improvement’? A qualitative study of the adoption of social movement strategies in the implementation of a quality improvement campaign. Sociol Health Illn 2017;39:1083-99. https://doi.org/10.1111/1467-9566.12560.
- Rozenblum R, Lisby M, Hockey PM, Levtzion-Korach O, Salzberg CA, Efrati N, et al. The patient satisfaction chasm: the gap between hospital management and frontline clinicians. BMJ Qual Saf 2013;22:242-50. https://doi.org/10.1136/bmjqs-2012-001045.
- Braithwaite J, Churruca K, Ellis LA. Can we fix the uber-complexities of healthcare? J R Soc Med 2017;110:392-4. https://doi.org/10.1177/0141076817728419.
- Improvement Academy. Yorkshire Patient Experience Toolkit. 2019. https://improvementacademy.org/tools-and-resources/the-yorkshire-patient-experience-toolkit.html (accessed 25 September 2019).
- Sanders EBN, Stappers PJ. Co-creation and the new landscapes of design. CoDesign 2008;4:5-18. https://doi.org/10.1080/15710880701875068.
- Horne M, Khan H, Corrigan P. People Powered Health: Health For People, By People and With People 2013. www.nesta.org.uk/sites/default/files/health_for_people_by_people_and_with_people.pdf (accessed 13 July 2018).
- Design Council. Why Encouraging NHS Staff to Think Differently Is Good for the Nation’s Health. 2008.
- Cottam H, Leadbeater C. Red Paper 01. Health Co-Creating Services 2004. www.designcouncil.org.uk/sites/default/files/asset/document/red-paper-health.pdf (accessed 13 July 2018).
- Manzini E. Designing coalitions: design for social forms in a fluid world. Strategic Des Res J 2017;10:187-93. https://doi.org/10.4013/sdrj.2017.102.12.
- Chamberlain P, Wolstenholme D, Dexter M. The State of the Art of Design Theory and Practice in Health: An Expert-Led Review of the Extent of the Art and Design Theory and Practice in Health and Social Care 2015. https://shura.shu.ac.uk/10634/ (accessed 13 July 2018).
- Muratovski G. Research for Designers: A Guide to Methods and Practice. London: SAGE Publications; 2016.
- Sousanis N. Unflattening. Cambridge, MA: Harvard University Press; 2015.
- James A. Innovative Pedagogies Series: Innovating in the Creative Arts With LEGO – Transforming Teaching; Inspiring Learning 2015. www.heacademy.ac.uk/innovating-creative-arts-lego (accessed 13 July 2018).
- Cooke J, Langley J, Wolstenholme D, Hampshaw S. ‘Seeing’ the difference: the importance of visibility and action as a mark of ‘authenticity’ in co-production – comment on ‘collaboration and co-production of knowledge in healthcare: opportunities and challenges’. Int J Health Policy Manage 2017;5:1-4. https://doi.org/10.15171/ijhpm.2016.136.
- Donetto S, Pierri P, Tsianakas V, Robert G. Experience-based co-design and healthcare improvement: realizing participatory design in the public sector. The Design J 2015;18:227-48. https://doi.org/10.2752/175630615X14212498964312.
- Bowen S, McSeveny K, Lockley E, Wolstenholme D, Cobb M, Dearden A. How was it for you? Experiences of participatory design in the UK health service. CoDesign 2013;9:230-46. https://doi.org/10.1080/15710882.2013.846384.
- Star SL, Griesemer J. Institutional ecology, ‘translations’ and boundary objects: amateurs and professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39. Soc Stud Sci 1989;19:387-420. https://doi.org/10.1177/030631289019003001.
- Henderson K. Flexible sketches and inflexible data bases: visual communication, conscription devices, and boundary objects in design engineering. Sci Technol Hum Val 1991;16:448-73. https://doi.org/10.1177/016224399101600402.
- Leigh Star S. This is not a boundary object: reflections on the origin of a concept. Sci Technol Hum Val 2010;35:601-17. https://doi.org/10.1177/0162243910377624.
- Carlile PR. A pragmatic view of knowledge and boundaries: boundary objects in new product development. Org Sci 2002;13:442-55. https://doi.org/10.1287/orsc.13.4.442.2953.
- Carlile PR. Transferring, translating, and transforming: an integrative framework for managing knowledge across boundaries. Org Sci 2004;15:555-68. https://doi.org/10.1287/orsc.1040.0094.
- Greenhalgh T, Jackson C, Shaw S, Janamian T. Achieving research impact through co-creation in community-based health services: literature review and case study. Milbank Q 2016;94:392-429. https://doi.org/10.1111/1468-0009.12197.
- Bate P, Robert G, Banaszak-Holl J, Levitsky SR, Zald M. Social Movements and the Transformation of American Health Care. New York, NY: Oxford University Press; 2010.
- Plsek P, Bibby J, Whitby E. Practical methods for extracting explicit design rules grounded in the experience of organizational managers. J Appl Behav Sci 2007;43:153-70. https://doi.org/10.1177/0021886306297013.
- Firth-Cozens J. Cultures for improving patient safety through learning: the role of teamwork. Qual Health Care 2001;10:ii26-31. https://doi.org/10.1136/qhc.0100026.
- Wilkinson J, Powell A, Davies H. Are Clinicians Engaged in Quality Improvement? A Review of the Literature on Healthcare Professionals’ Views on Quality Improvement Initiatives. London: The Health Foundation; 2011.
- Davies C. Getting health professionals to work together: there’s more to collaboration than simply working side by side. BMJ 2000;320:1021-2. https://doi.org/10.1136/bmj.320.7241.1021.
- Krogstad U, Hofoss D, Hjortdahl P. Doctor and nurse perception of inter-professional co-operation in hospitals. Int J Qual Health Care 2004;16:491-7. https://doi.org/10.1093/intqhc/mzh082.
- Ocloo J, Matthews R. From tokenism to empowerment: progressing patient and public involvement in healthcare improvement. BMJ Qual Saf 2016;25:626-32. https://doi.org/10.1136/bmjqs-2015-004839.
- INVOLVE. Briefing Notes for Researchers: Involving the Public in NHS, Public Health and Social Care Research. 2012.
- NHS England. Insight. 2018. www.england.nhs.uk/ourwork/insight/ (accessed 13 July 2018).
- Berwick DM. What ‘patient-centered’ should mean: confessions of an extremist. Health Aff 2009;28:w555-65. https://doi.org/10.1377/hlthaff.28.4.w555.
- Institute for Healthcare Improvement. How to Improve. 2018. www.ihi.org/resources/Pages/HowtoImprove/default.aspx (accessed 13 July 2018).
- Cohn S. ‘Trust my doctor, trust my pancreas’: trust as an emergent quality of social practice. Philos Ethics Humanit Med 2015;10. https://doi.org/10.1186/s13010-015-0029-6.
- Pflueger D. Accounting for quality: on the relationship between accounting and quality improvement in healthcare. BMC Health Serv Res 2015;15. https://doi.org/10.1186/s12913-015-0769-4.
- Reason P, Bradbury H. The Handbook of Action Research: Participative Inquiry in Practice. London: SAGE Publications; 2009.
- Fletcher AJ, MacPhee M, Dickson G. Doing participatory action research in a multicase study: a methodological example. Int J Qual Methods 2015;14. https://doi.org/10.1177/1609406915621405.
- Coghlan D, Brannick T. Doing Action Research in Your Own Organization. London: SAGE Publications; 2009.
- Montgomery A, Doulougeri K, Panagopoulou E. Implementing action research in hospital settings: a systematic review. J Health Organ Manag 2015;29:729-49. https://doi.org/10.1108/JHOM-09-2013-0203.
- Labonte R, Feather J, Hills M. A story/dialogue method for health promotion knowledge development and evaluation. Health Educ Res 1999;14:39-50. https://doi.org/10.1093/her/14.1.39.
- McNiff J. The Privatisation of Action Research 2003. www.jeanmcniff.com/items.asp?id=84&term=privatisation+of (accessed 1 March 2018).
- Pope C, Mays N. Qualitative Research in Health Care. London: BMJ Books; 1999.
- Blumer H. What is wrong with social theory? Am Sociol Rev 1954;19:3-10. https://doi.org/10.2307/2088165.
- Entwistle VA, Watt IS. Treating patients as persons: a capabilities approach to support delivery of person-centered care. Am J Bioeth 2013;13:29-39. https://doi.org/10.1080/15265161.2013.802060.
- Williams P. The competent boundary spanner. Public Admin 2002;80:103-24. https://doi.org/10.1111/1467-9299.00296.
- Braithwaite J. Between-group behaviour in health care: gaps, edges, boundaries, disconnections, weak ties, spaces and holes: a systematic review. BMC Health Serv Res 2010;10. https://doi.org/10.1186/1472-6963-10-330.
- Bate P, Robert G, Fulop N, Øvretveit J, Dixon-Woods M. Perspectives on Context: A Selection of Essays Considering the Role of Context in Successful Quality Improvement. London: The Health Foundation; 2014.
- Reed JE, Card AJ. The problem with plan-do-study-act cycles. BMJ Qual Saf 2016;25:147-52. https://doi.org/10.1136/bmjqs-2015-005076.
- Smith E. Review of Centrally Funded Improvement and Leadership Development Functions. NHS England: NHS England Publications Gateway Reference 03858; 2015.
- Parker LE, Kirchner JE, Bonner LM, Fickel JJ, Ritchie MJ, Simons CE, et al. Creating a quality-improvement dialogue: utilizing knowledge from frontline staff, managers, and experts to foster health care quality improvement. Qual Health Res 2009;19:229-42. https://doi.org/10.1177/1049732308329481.
- Armstrong N, Brewster L, Tarrant C, Dixon R, Willars J, Power M, et al. Taking the heat or taking the temperature? A qualitative study of a large-scale exercise in seeking to measure for improvement, not blame. Soc Sci Med 2018;198:157-64. https://doi.org/10.1016/j.socscimed.2017.12.033.
- Norman AC, Fritzen L, Andersson G. Pedagogical approaches to quality improvement coaching in healthcare: a Swedish case study of how improvement coaches approach learning in a contemporary healthcare system. Nordic J Stud Educ Policy 2015. https://doi.org/10.3402/nstep.v1.30178.
- Ball JE, Murrells T, Rafferty AM, Morrow E, Griffiths P. ‘Care left undone’ during nursing shifts: associations with workload and perceived quality of care. BMJ Qual Saf 2014;23:116-25. https://doi.org/10.1136/bmjqs-2012-001767.
- Barac R, Stein S, Bruce B, Barwick M. Scoping review of toolkits as a knowledge translation strategy in health. BMC Med Inform Decis Mak 2014;14. https://doi.org/10.1186/s12911-014-0121-7.
- Yamada J, Shorkey A, Barwick M, Widger K, Stevens BJ. The effectiveness of toolkits as knowledge translation strategies for integrating evidence into clinical care: a systematic review. BMJ Open 2015;5. https://doi.org/10.1136/bmjopen-2014-006808.
- Hammersley M. Action research: a contradiction in terms? Oxford Rev Educ 2004;30:165-81. https://doi.org/10.1080/0305498042000215502.
- Funnell S, Rogers P. Purposeful Program Theory: Effective Use of Theories of Change and Logic Models. Hoboken, NJ: Wiley; 2011.
- Davies E, Cleary PD. Hearing the patient’s voice? Factors affecting the use of patient survey data in quality improvement. Qual Saf Health Care 2005;14:428-32. https://doi.org/10.1136/qshc.2004.012955.
- Dixon-Woods M, Leslie M, Tarrant C, Bion J. Explaining Matching Michigan: an ethnographic study of a patient safety program. Implement Sci 2013;8. https://doi.org/10.1186/1748-5908-8-70.
- Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf 2015;24:228-38. https://doi.org/10.1136/bmjqs-2014-003627.
- Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ 2015;350. https://doi.org/10.1136/bmj.h1258.
- Timmermans S, Tavory I. Theory construction in qualitative research: from grounded theory to abductive analysis. Sociol Theory 2012;30:167-86. https://doi.org/10.1177/0735275112457914.
- Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol 2013;13. https://doi.org/10.1186/1471-2288-13-117.
- Miles M, Huberman M. Qualitative Data Analysis: An Expanded Sourcebook. London: SAGE Publications; 1994.
- Provost L, Murray S. The Health Care Data Guide: Learning from Data for Improvement. San Francisco, CA: Jossey-Bass; 2011.
- Thor J, Lundberg J, Ask J, Olsson J, Carli C, Härenstam KP, et al. Application of statistical process control in healthcare improvement: systematic review. Qual Saf Health Care 2007;16:387-99. https://doi.org/10.1136/qshc.2006.022194.
- Bate P, Robert G, Fulop N, Øvretveit J, Dixon-Woods M. Perspectives on Context: A Selection of Essays Considering the Role of Context in Successful Quality Improvement. London: The Health Foundation; 2014.
- Poots AJ, Reed JE, Woodcock T, Bell D, Goldmann D. How to attribute causality in quality improvement: lessons from epidemiology. BMJ Qual Saf 2017;26:933-7. https://doi.org/10.1136/bmjqs-2017-006756.
- Rogers PJ. Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation 2008;14:29-48. https://doi.org/10.1177/1356389007084674.
- Hawe P. Lessons from complex interventions to improve health. Annu Rev Public Health 2015;36:307-23. https://doi.org/10.1146/annurev-publhealth-031912-114421.
- Ling T. Evaluating complex and unfolding interventions in real time. Evaluation 2012;18:79-91. https://doi.org/10.1177/1356389011429629.
- Harvey G, Lynch E. Enabling continuous quality improvement in practice: the role and contribution of facilitation. Front Public Health 2017;5. https://doi.org/10.3389/fpubh.2017.00027.
- Pfadenhauer LM, Gerhardus A, Mozygemba K, Lysdahl KB, Booth A, Hofmann B, et al. Making sense of complexity in context and implementation: the Context and Implementation of Complex Interventions (CICI) framework. Implement Sci 2017;12. https://doi.org/10.1186/s13012-017-0552-5.
- Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care 2003;41:I30-8. https://doi.org/10.1097/00005650-200301001-00004.
- Anhang Price R, Elliott MN, Zaslavsky AM, Hays RD, Lehrman WG, Rybowski L, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev 2014;71:522-54. https://doi.org/10.1177/1077558714541480.
- Festinger L. A Theory of Cognitive Dissonance. Redwood City, CA: Stanford University Press; 1962.
- Lawton R, Taylor N, Clay-Williams R, Braithwaite J. Positive deviance: a different approach to achieving patient safety. BMJ Qual Saf 2014;23:880-3. https://doi.org/10.1136/bmjqs-2014-003115.
- Kelly N, Blake S, Plunkett A. Learning from excellence in healthcare: a new approach to incident reporting. Arch Dis Child 2016;101:788-91. https://doi.org/10.1136/archdischild-2015-310021.
- Rycroft-Malone J. The PARIHS framework – a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual 2004;19:297-304. https://doi.org/10.1097/00001786-200410000-00002.
Appendix 1 Surveys
Reference number | Source | Type | Level of applicability | Evidence for validity | Timing of feedback collection | Modes of feedback collection | Requirement | Supported by hospital system | Timeliness of feedback | Regularity of feedback | Who initiates feedback? | Who provides feedback? | Role in QI |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
28–30 | NHS Adult Inpatient Survey | Quantitative (+ comments) | Hospital | Validated | Post discharge | Survey (paper) | Mandated | Yes | Delayed | Annual or biannual | Service provider (at any level) | Patient | Data |
31,32 | NHS A&E survey | Quantitative (+ comments) | Service or specialty | Validated | Post discharge | Survey (paper) | Mandated | Yes | Delayed | Annual or biannual | Service provider (at any level) | Patient | Data |
33 | NHS Maternity Services survey | Quantitative (+ comments) | Service or specialty | Validated | Post discharge | Survey (paper) | Mandated | Yes | Delayed | Annual or biannual | Service provider (at any level) | Patient | Data |
34 | Scottish Inpatient Patient Experience Survey | Quantitative (+ comments) | Hospital | Validated | Post discharge | Survey (combination) | Mandated | Yes | Delayed | Annual or biannual | Service provider (at any level) | Patient | Data |
35 | Scottish Maternity Care Survey | Quantitative (+ comments) | Service or specialty | Validated | Post discharge | Survey (combination) | Mandated | Yes | Delayed | Annual or biannual | Service provider (at any level) | Patient | Data |
36 | Inpatient Patient Experience Survey 2014 (Northern Ireland) | Quantitative (+ comments) | Hospital | Validated | Post discharge | Survey (paper) | Mandated | Yes | Delayed | Ad hoc (run once in 2014) | Service provider (at any level) | Patient | Data |
37 | Your NHS Wales Experience Questionnaire | Quantitative (+ comments) | Either | Validation ongoing | Either | Survey (combination) | Not mandated (strongly recommended by NHS Wales) | Yes | Delayed | Not specified | Service provider (at any level) | Patient | Data |
38 | Picker Patient Experience Questionnaire (PPE-15) | Quantitative | Either | Validated | Post discharge | Survey (paper) | Voluntary | No | Delayed | Ad hoc | Service provider (at any level) | Patient | Data |
39 | Patient Experience Questionnaire (New models study) | Quantitative (+ comments) | Service or specialty | Not validated | Either | Survey (paper) | Voluntary | No | Delayed | Ad hoc | Service provider (at any level) | Patient | Data |
40 | Oxford Patient Involvement & Experience Scale | Quantitative | Either | Validated | Either | Survey (paper) | Voluntary | No | Delayed | Ad hoc | Service provider (at any level) | Patient | Data |
42 | Newcastle Satisfaction with Nursing Scale | Quantitative | Either | Validated | Either | Survey (paper) | Voluntary | No | Delayed | Ad hoc | Service provider (at any level) | Patient | Data |
43 | ICE Questionnaire | Quantitative | Service or specialty | Validated | Either | Survey (presume paper) | Voluntary | No | Delayed | Ad hoc | Service provider (at any level) | Patient | Data |
44 | Patient Evaluation of Emotional Care during Hospitalisation | Quantitative (+ comments) | Service or specialty | Validated | In situ | Survey (paper) | Voluntary | No | Delayed | Ad hoc | Service provider (at any level) | Patient | Data |
45 | Urgent Care System Questionnaire | Quantitative | Service or specialty | Validated | Post discharge | Survey (telephone) | Voluntary | No | Delayed | Ad hoc | Service provider (at any level) | Patient | Data |
46 | Patient Career Diary | Quantitative (+ comments) | Either | Validated | Either | Survey (paper) | Voluntary | No | Delayed | Ad hoc | Service provider (at any level) | Patient | Data |
47 | VOICE survey | Quantitative (+ comments) | Either | Validated | In situ | Survey (paper) | Voluntary | No | Delayed | Ad hoc | Service provider (at any level) | Patient | Data |
48 | Hospital care and discharge: patients’ and carers’ opinions | Quantitative (+ comments) | Hospital | Not validated | Post discharge | Survey (paper) | Voluntary | No | Delayed | Ad hoc | Service provider (at any level) | Either | Data |
Appendix 2 Patient-initiated feedback processes
Reference number | Source | Type | Level of applicability | Evidence for validity | Timing of feedback collection | Modes of feedback collection | Requirement | Supported by hospital system | Timeliness of feedback | Regularity of feedback | Who initiates feedback? | Who provides feedback? | Role in QI |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
49–52 | PALSs | Qualitative | Any | N/A | Either | Internal hospital forms (web/paper) | Mandatory | Yes | Real time | Ad hoc | Patients/carers | Either | Data |
20 | Feedback cards (e.g. Points of You) | Qualitative | Any | N/A | Either | Internal hospital forms (web/paper) | Voluntary | Yes | Real time | Ad hoc | Patients/carers | Either | Data |
54–57 | Formal complaints | Qualitative | Any | N/A | Either | Internal hospital forms (web/paper) | Mandatory | Yes | Real time | Ad hoc | Patients/carers | Either | Data |
20 | Informal feedback (e.g. compliments) | Qualitative | Any | N/A | Either | Internal hospital forms (web/paper), thank-you cards | Voluntary | No | Real time | Ad hoc | Patients/carers | Either | Data |
58 | NHS Choices | Qualitative (+ rating stars) | Any | N/A | Either | External (web) | Mandatory formal support | Yes | Real time | Ad hoc | Patients/carers | Either | Data |
53 | Care Opinion (was Patient Opinion at time of search) | Qualitative | Any | N/A | Either | External (web) | Voluntary | Both | Real time | Ad hoc | Patients/carers | Either | Data |
59 | iWantGreatCare | Qualitative (+ rating stars) | Any | N/A | Either | External (web) | Voluntary | No | Real time | Ad hoc | Patients/carers | Either | Data |
60 | Mumsnet | Qualitative | Any | N/A | Either | External (web) | Voluntary | No | Real time | Ad hoc | Patients/carers | Either | Data |
General knowledge | | Qualitative | Any | N/A | Either | External (web) | Voluntary | No | Real time | Ad hoc | Patients/carers | Either | Data |
General knowledge | Google reviews of hospitals | Quantitative (+ comments) | Any | N/A | Either | External (web) | Voluntary | No | Real time | Ad hoc | Patients/carers | Either | Data |
General knowledge | Facebook set up by ward/hospital | Qualitative | Any | N/A | Either | External (web) | Voluntary | Yes | Real time | Ad hoc | Patients/carers | Either | Data |
General knowledge | Facebook (general) | Qualitative | Any | N/A | Either | External (web) | Voluntary | No | Real time | Ad hoc | Patients/carers | Either | Data |
Appendix 3 Feedback and improvement frameworks
Reference number | Source | Type | Level of applicability | Evidence for validity | Timing of feedback collection | Modes of feedback collection | Requirement | Supported by hospital system | Timeliness of feedback | Regularity of feedback | Who initiates feedback? | Who provides feedback? | Role in QI |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
61 | Emotional touchpoints | Qualitative | Service or specialty | N/A | Either | Qualitative research methods | Voluntary | No | Delayed | Ad hoc | Service provider | Either | Data |
9 | Discovery interview/collecting patient stories | Qualitative | Service or specialty | N/A | Either | Qualitative research methods | Voluntary | No | Delayed | Ad hoc | Service provider | Either | Data |
62 | Kinda Magic approach | Mixed | Service or specialty | N/A | In situ | Qualitative research methods | Voluntary | Yes (Peninsula Trust) | Delayed | Missing | Service provider | Patient | Data + QI |
63 | Patient stories of care experience (EBCD and aEBCD) | Qualitative | Service or specialty | N/A | Either | Qualitative research methods | Voluntary | No | Delayed | Ad hoc | Service provider | Either | Data + QI |
64 | Patient journey (Action research Baron) | Qualitative | Service or specialty | N/A | Either | Qualitative research methods | Voluntary | No | Delayed | Ad hoc | Service provider | Patient | Data + QI |
1 | Always Events | Qualitative | Service or specialty | N/A | In situ | Qualitative research methods | Voluntary | Promoted by NHSE | Delayed | Ad hoc | Service provider | Either | Data + QI |
65 | Fifteen Steps challenge | Qualitative | Service or specialty | N/A | In situ | Qualitative research methods | Voluntary | Promoted by NHSE | Delayed | Ad hoc | Service provider | Observer | Data + QI |
Appendix 4 Other types of feedback
Reference number | Source | Type | Level of applicability | Evidence for validity | Timing of feedback collection | Modes of feedback collection | Requirement | Supported by hospital system | Timeliness of feedback | Regularity of feedback | Who initiates feedback? | Who provides feedback? | Role in QI |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
41 | HowRwe (How are we doing) | Quantitative (+ comments) | Service or specialty | Validation ongoing | In situ | Survey (combination) | Voluntary | No | Real time | Continuous | Service provider (at any level) | Patient | Data |
75 | FFT | Quantitative (+ comments) | Either | Not validated | Either | Survey (combination) | Mandatory | Yes | Real time | Continuous | Service provider (at any level) | Patient | Data |
List of abbreviations
- A&E: accident and emergency
- aEBCD: accelerated experience-based co-design
- AR: action research
- CQC: Care Quality Commission
- EBCD: experience-based co-design
- FFT: Friends and Family Test
- HCP: health-care professional
- ICE: Intensive Care Experience
- IHI: Institute for Healthcare Improvement
- KMb: knowledge mobilisation
- NIHR: National Institute for Health Research
- PALS: Patient Advice and Liaison Service
- PDSA: plan–do–study–act
- PE: patient experience
- PET: Patient Experience Toolkit
- PI: principal investigator
- PPI: patient and public involvement
- QI: quality improvement
- SPC: statistical process control
Notes
- Detailed description of iterations
Supplementary material can be found on the NIHR Journals Library report project page (www.journalslibrary.nihr.ac.uk/programmes/hsdr/1415632/#/documentation).
Supplementary material has been provided by the authors to support the report and any files provided at submission will have been seen by peer reviewers, but not extensively reviewed. Any supplementary material provided at a later stage in the process may not have been peer reviewed.