Notes
Article history
The research reported in this issue of the journal was funded by the HS&DR programme or one of its preceding programmes as project number 14/156/08. The contractual start date was in February 2016. The final report began editorial review in June 2018 and was accepted for publication in February 2019. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HS&DR editors and production house have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the final report document. However, they do not accept liability for damages or losses arising from material published in this report.
Declared competing interests of authors
None.
Permissions
Copyright statement
© Queen’s Printer and Controller of HMSO 2019. This work was produced by Donetto et al. under the terms of a commissioning contract issued by the Secretary of State for Health and Social Care. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Chapter 1 Introduction and context
Patients’ and carers’ experiences of hospital care are an important aspect of health-care quality. Together with patient safety and clinical outcomes, they are considered one of the three fundamental elements contributing to quality of care, so much so that in 2013 NHS England appointed a director of patient experience (now director for experience, participation and equalities) and in 2018 NHS Improvement published a Patient Experience Improvement Framework1 to support NHS trusts to optimise their performance in relation to Care Quality Commission (CQC) standards. NHS trusts develop formal strategies to monitor and improve experiences of care and collect data about patients’ experiences using a wide range of methods. These methods include detailed postal questionnaires (e.g. the national Adult Inpatient Survey2), much smaller sets of satisfaction-type questions [including the nationally required ‘Friends and Family Test’ (FFT)], formal and informal complaints and compliments from patients, hand-held devices given to patients on wards to provide ‘real-time’ data, patient stories recorded through face-to-face interviews, and feedback posted on patient/public websites (such as NHS Choices and Care Opinion). Such data are collected to fulfil a wide range of functions, including identifying local quality improvement (QI) priorities, allowing organisations to benchmark their performance against that of their peers, communicating results to the general public as part of wider engagement and transparency efforts, and informing internal and external quality inspections and regulatory processes.
Previously published studies have examined the types of patient experience data currently in use in the NHS, the systems (or lack thereof) through which these data inform QI specifically, and the initiatives that are implemented as a result.3–7 In 2012, a survey highlighted the main methods used to collect patient experience data, the frequency with which these data were collected, and the mechanisms through which they were reported and made available to the public.8 Furthermore, an evidence scan in 2013 reviewed existing approaches to measuring patient experience and their relevance to person-centred care,9 and essential aspects of care experiences indicative of good care can be found in national clinical guidelines.10
Developing and testing interventions
Research has highlighted that, despite the vast amount of data collected about patients’ experiences, it is not clear if, and how, NHS organisations use these data to identify and implement improvements in health-care quality. In response to the widespread acknowledgement of the relative lack of improvement deriving from eliciting patient feedback on a large scale, recent years have seen the emergence of several initiatives to develop and test various ward- and service-level interventions in the NHS in terms of their impact on quality (encompassing patient experience as well as patient safety).11,12 For example, feasibility testing, including a process evaluation, of the Patient Reporting and Action for a Safe Environment (PRASE)11 intervention found that ‘ward staff were positive about the use of patient feedback for service improvement and were able to use the feedback as a basis for action planning, although engagement with the process was variable’.11 The researchers suggested that the ‘value of collecting evermore data is questionable without a change to the conditions under which staff find it difficult to respond’ to these data.12 In the USA, Grob et al.13 have developed and piloted a prototype method for rigorously eliciting narratives about patients’ experiences of clinical care that may be useful for both public reporting and QI. A further applied research example is the ongoing Patient Experience and Reflective Learning (PEARL) study,14 which seeks to promote reflective learning that links patient experience as directly and immediately as possible to both group and individual performance; the researchers aim to develop a theory-based framework of workplace-based reflective learning tools and processes, with the potential to be incorporated into national training programmes.
Despite the growing interest in studying and improving ways of gathering and using patient experience data, the existing evidence base has not often addressed questions around issues of responsibility and accountability for their collection and use. For example, to date, only anecdotal evidence exists as to the allocation of responsibilities for the patient experience strategy at the hospital level.3,4 Furthermore, differences in strategic approaches to the use of patient experience data have not been examined, nor have the specific mechanisms through which such data translate (or not) into successful quality improvements. This is particularly problematic in view of the growing policy emphasis on the importance of improving patient experience as a core dimension of overall care quality.
In view of the evidence mentioned above, we have not, as stated in our study protocol, undertaken a systematic or scoping review of the literature. Related studies simultaneously commissioned by the National Institute for Health Research (NIHR) Health Services and Delivery Research (HS&DR) programme are conducting scoping or systematic reviews to examine what methods are potentially available to elicit patient experience data in the UK,15 to find out what is known regarding online feedback from patients,16 and to identify, describe and classify approaches to collecting and using patient experience data to improve inpatient mental health services in England.17 However, to help contextualise the policy and practice implications arising from our study findings (see Conclusions and implications for policy, practice and research), and to situate our study of the relationship between patient experience data and QI in relation to the contemporary literature, we highlight below three recent contributions: a viewpoint piece by Flott et al.,18 a 5-year programme of research by Burt et al.19 and an empirical study by Graham et al.20 Each of these acknowledges that patient surveys, in multiple forms, continue to dominate the patient experience data landscape. We also discuss the different perspectives found in the work of, for example, Ziewitz21 and Pflueger,22 to provide a fuller picture of the scholarly work relevant to the analysis of the impact of patient experience data. Finally, we offer a brief overview of the theoretical lenses and tools that inform our work, which we discuss further in Chapter 4.
Contemporary studies of the collection and use of patient experience data
Flott et al.18 suggest eight ‘key challenges’ contributing to a disconnect between the collection and the use of patient survey feedback. As is common in the contemporary literature, these authors’ work focuses largely on the structural and technical features of the collection systems.18 We summarise the eight ‘key challenges’ in Box 1.
- Staff may show scepticism towards data, especially if they do not understand the conditions under which they were generated.
- Clinical and managerial staff may not be trained in social research methods and so lack the skills to understand data.
- Data garnered from large surveys can be complex to understand.
- Patient feedback in national surveys is presented as aggregate data, which hinders local ownership.
- Patient experience data are not often linked to other relevant data or other indicators of quality.
- Different feedback mechanisms can lead to different messages about the quality of care; these disparate data are not often reconciled.
- Guidance documents for national surveys are complex and require a high level of technical expertise within providers.
- Providers may receive some or no external support to make sense of survey data and translate these into improvement; there is no consistency of support across providers.
In the light of these challenges, the authors propose ways to enhance the use of patient-reported feedback. They advocate methodological expertise and training (similar recommendations have been made in the specific contexts of both patient-reported experience measures23 and patient-reported outcome measures24), the trialling of more dynamic collection methods (which the authors suggest does not necessarily mean new vehicles for collection, but rather enhancements to existing collections) and innovations applied to sampling. The authors note that ‘the underuse of patient-reported feedback is well documented’,18 but argue that the complexity underlying this underuse has not previously been adequately illustrated.
Although our research focuses on acute NHS trusts only, many of the issues highlighted by Burt et al.19 in their study of the measurement and improvement of patient experience in the primary care sector in England remain relevant. Among their conclusions was that large-scale postal surveys are likely to remain the dominant approach to gathering patient feedback in primary care for the time being, although a range of other methods are being developed (including real-time feedback, focus groups, online feedback, analyses of complaints, patient participation groups and social media). In terms of using patient experience data for QI, the authors found that ‘broadly, staff in different primary care settings neither believed nor trusted patient surveys’19 for a range of reasons: doubts centred on the validity and reliability of surveys and on the probable representativeness of those who responded, and some practices had had negative experiences of pay being linked to survey scores. In short, staff viewed surveys as necessary but not sufficient, and there were clear preferences for more qualitative feedback to supplement survey scores, as this provided more actionable data on which to mount QI initiatives.
Following on from Burt et al.’s19 research programme (which also included an exploratory trial of real-time feedback in a primary care setting), Graham et al.20 conducted a study to develop and validate a survey of compassionate care for use in near real time on elderly-care wards and in accident and emergency (A&E) departments, as well as to evaluate the effectiveness of the real-time feedback approach to improving relational aspects of care.20 The study found a small but statistically significant improvement in relational aspects of care, and that staff implemented a variety of improvements to enhance communication with patients. The authors made a series of recommendations for future research (summarised in Box 2) that relate not only to several of the other studies cited in this section but also to the findings and some of the implications of our own study detailed in Chapter 9.
- Explore the impact of different styles of reporting formats, including graphics, as there is a need to find an optimal reporting format.
- Understand the best channels for dissemination of results within trusts.
- Understand the roles volunteers play in various activities at NHS trusts.
- Understand the full costs and benefits of the approach to hospitals and the NHS.
- Explore the acceptability of the survey instrument designed to measure relational aspects of care in different hospital settings and with other patient populations.
- Consider whether or not existing instruments already used within the NHS could be adapted to include a greater focus on relational aspects of care and for use with a near real-time feedback approach.
We highlight the three contributions above at this early stage in our report as they provide an important counterpoint to the approach we have taken in our study. Although increasing attention to the most effective ways of collecting and using patient experience data to improve local services is to be welcomed, there is a tendency in the literature to reinforce points already acknowledged (e.g. regarding the limitations of surveys) or to make fairly unspecific calls for ‘more and better data’. The starting point for our study is, therefore, that a theoretically grounded and more nuanced perspective on what patient experience data are, what they do and what they might do may be of greater benefit.
Different perspectives
We agree with Flott et al.’s18 call for a better understanding of ‘how patients can facilitate the uptake of these methods (to make feedback more useful) and inform other, more innovative solutions to using their feedback’.18 However, we are struck by the primarily technical nature of both the challenges and the proposals often suggested for addressing them. As we have written elsewhere, reflecting on the existing literature:3,18,22
. . . social science research has shown a striking lack of interest in critically reflecting on broader issues related to the nature of data, i.e. on the very value of collecting patient experience feedback in the first place. Indeed, there has been little investigation of the ontological reality of data . . .
Desai et al.25
In parallel with more interventionist research studies (see Developing and testing interventions), there has also been a recent interest in making greater theoretical contributions to ‘patient experience’ as an important aspect of health-care QI work. For example, Ziewitz21 undertook an ethnographic study that is particularly relevant to our own research, as it asked ‘what does it take to mobilise experiences of care and make them useful for improving services?’21 His fieldwork focused on a UK-based patient feedback website (Care Opinion: www.careopinion.org.uk; accessed 19 September 2019) and sought to ‘develop a critical perspective on patient experience as a contingent accomplishment and a focal point for eliciting, provoking, and respecifying relations of accountability’.21 He found that:
. . . capturing the patient experience is not so much a matter of accurate reporting . . . but rather an exercise in testing versions of reality through the ongoing respecification of objects, audiences, and identities.
Ziewitz21
Renedo et al.26 have also used ethnographic fieldwork applied to patient-involvement initiatives in England to explore how health-care professionals articulate the relationship between patient experience and ‘evidence’, arguing that they create hybrid forms of knowledge that ‘help professionals to respond to workplace pressures by abstracting experiences from patients’ biographies, instrumentalising experiences and privileging “disembodied” forms of involvement’.26
Pflueger,22 drawing on an empirical study in the NHS, highlights how accounting (‘standardized measurement, public reporting, performance evaluation and managerial control’22) has become increasingly central to efforts to improve the quality of health care. However, he identifies flaws in the common conceptualisation of the ‘problem of accounting as a matter primarily of uncovering or capturing information’22 and explains how this mistaken assumption can:
. . . produce systems of measurement and management that generate less rather than more information about quality, that provide representations of quality which are oriented away from the reality of practice on the front line, and that create an illusion of control while producing areas of unknowability.
Pflueger22
Pflueger’s22 central argument is that, although existing practices of accounting for quality have a variety of dysfunctional and even counter-productive effects, accounting is often ineffective not because it is inherently incompatible with quality and the complexities of health care, but because its underlying characteristics have not been fully acknowledged or understood. He argues that there are three ways in which the ‘role of accounting for QI might be reimagined on more theoretically and empirically sound terms which is likely to provide the greatest potential for producing improvements’:22 (1) by being cautious towards the ‘ever more centralized, standardized and unified measures of quality’ and instead advocating the cultivation of ‘new, messy, overlapping, and always incomplete representations of quality’; (2) by carefully examining and evaluating ‘the sorts of activities, actions, behaviours, and consequences that result from knowing about quality through a particular style or set of concerns’ (i.e. evaluating accounting regimes in terms of what they do to practice); and (3) by thinking of quality management less in terms of ‘operating on the basis of numbers’ and more in terms of ‘operating around numbers, and using them to show not what is known but the boundaries of the unknown’.22 Pflueger acknowledges that the new sorts of accounting systems he envisages will not create ‘illusions of certainty, accountability and control’; rather, they will highlight the limitations of all of these things.22
Related to Pflueger’s work, two very recent studies27,28 start from the premise that ‘formal metrics for monitoring the quality and safety of healthcare have a valuable role, but may not, by themselves, yield full insight into the range of fallibilities in organizations’. These studies refer to ‘soft intelligence’, defined as the:
. . . processes and behaviours associated with seeking and interpreting soft data – of the kind that evade easy capture, straightforward classification and simple quantification – to produce forms of knowledge that can provide the basis for intervention.
Martin et al.27
One of the two studies is based on interviews with senior leaders, including managers and clinicians, involved in health-care quality and safety in the NHS and illustrates how participants valued softer forms of data but struggled to access these and turn them into a useful form of ‘knowing’.27 More specifically, the approaches participants used in systematising the collection and interpretation of soft data (e.g. aggregation, triangulation) risked replicating the limitations of ‘hard’, quantitative data. As an alternative, the authors27 highlight the potential benefits, as do Renedo et al.26 and Pflueger,22 of seeking out and hearing multiple voices, and how this is consistent with conceptual frameworks of organisational sense-making and dialogical understandings of knowledge. The second recent study28 was also based on interviews, but involved personnel from three academic hospitals in two countries and participants from a wide range of occupational and professional backgrounds, including senior leaders and those at the sharp end of care.28 The authors found that the ‘legal and bureaucratic considerations that govern formal channels for the voicing of concerns may, perversely, inhibit staff from speaking up’28 and argued that those responsible for quality and safety should consider ‘complementing formal mechanisms with alternative, informal opportunities for listening to concerns’.28
Actor–network theory
To bring similarly critical but different sensibilities to our own study, we adopted a methodological approach grounded in actor–network theory (ANT).29–33 In doing so, we drew on the broader landscape of sociomaterial approaches to the study of organisational processes and health-care practices.34–42 Although not a unitary theory, ANT provides a framework and tools that allowed us to pay attention to the ‘materiality’ of organisational activity and the inseparability of the technical and the social aspects in organisational practices.43–45 In a recent paper,25 we have argued that research approaches to the study of patient experience data informed by ANT have the potential to make at least two contributions to current debates on health-care improvement. First, ANT perspectives and sensibilities emphasise the enacted nature of patient experience data and quality improvement:
. . . bringing to the fore the ways in which quality improvement emerges, or fails to emerge, as a result of a contingent series of interactions between various human (individual, institutional) and non-human actors (bureaucratic documents, policies, technologies, targets, etc.).
Desai et al.25
Second, ANT sheds light on useful dimensions of organisational structure and functioning that might otherwise be overshadowed by more traditional perspectives that see data ‘as inert, open to infinite technical refinement in the service of quality improvement’25 (see Contemporary studies of the collection and use of patient experience data). When data travel in an organisation, transforming and translating as they go (into reports, narratives, interventions, etc.), this travelling ‘makes and reveals alternative organizational relations to those which are officially recognized’.25 In the case of hospitals, which are formally hierarchical institutions with specific configurations of roles and responsibilities, paying attention to the interactions in which data become embedded:25,46,47
. . . may reveal alternative decision-making processes and may bring to the fore the role of certain actors (such as health care assistants or receptionists) who are conventionally marginal, but who nevertheless often come to play an unexpectedly central role in ensuring the quality of care.
Desai et al.25
The flattened perspective afforded by ANT approaches, ‘which treats actors as equally important regardless of their assumed place in an institution’, at the same time requires and ensures ‘that better attention is paid to alternative organizational arrangements as well as to forms of agency which would otherwise go undetected, including non-human agency’.25
A widely recognised discrepancy exists between the proliferation of forms of patient experience data collection and the limited ways in which such data are used to inform QI. Our study is predicated on the idea that ANT allows us to shine a light on the interactions and negotiations between actors, keeping ‘the messy, everyday mechanics of improvement centre stage’.25 Recent research has highlighted the value of paying attention to the role of artefacts, such as dashboards and real-time feedback devices (see Developing and testing interventions and Contemporary studies of the collection and use of patient experience data), in supporting the organisation of health-care work, with particular reference to QI.48,49 Drawing on ANT and sociomaterial approaches to health-care practices, our study contributes to emerging understandings of the infrastructural context of QI work.34 We describe in more detail how we drew on ANT and applied it to our questions relating to the collection and use of patient experience data for QI in Chapters 3 and 4.
Our study seeks to add to the existing scholarship and ongoing studies summarised above by examining in detail, through the lens of ANT, selected examples of current strategies and practices in English acute NHS trusts relating to the collection and use of patient experience data. By adding these further dimensions to the existing evidence on current strategies, our study moves beyond well-known ‘challenges’ (e.g. see Flott et al.18 above) and generates a strong evidence base with clear implications for how to optimise organisational strategies and practices, including the roles and responsibilities of nurses, in the collection, validation and use of patient experience data.
Structure of the report
Having set out the background and context for our study, including the theoretical lenses we chose to adopt, in Chapters 2 and 3 we describe the aims and objectives of the study and detail the study design (explaining any changes to the original research protocol); in Chapter 4 we set out the analytical processes that we used to make sense of our data. We then present three findings chapters.
In Chapter 5 we describe what counts as patient experience data in the participating NHS trusts and begin to explore the similarities and differences between practices at different trusts, the multiple nature of any given type of data, the extent to which some data practices appear to be more regular than others, and the variation we observed in the organisation of labour around patient experience work at different study sites.
In Chapter 6, we illustrate three essential points about what patient experience data do and how they do it: (1) how data can ‘act’ in different ways owing to their multiple forms, and how social actors develop strategies to compensate for flaws in data; (2) the key qualities that characterise interactions between social actors (be they people or organisational processes/external entities) and data when these data can clearly be seen to lead to care improvements; and (3) how the interaction of various human and non-human actors with standardised forms of data [e.g. the National Cancer Patient Experience Survey (NCPES)] produces unexpected outcomes that make the data more or less able to lead to quality improvement.
In Chapter 7, we take a step back from our direct observations of data practices in NHS trusts and illustrate the ways in which patient experience data work was re-examined and discussed in the light of our preliminary findings by representatives of participating trusts and policy-makers during our Joint Interpretive Forums (JIFs).
In Chapter 8, we summarise the relevance of our findings in the context of patient experience and quality improvement practices in the NHS and of the academic literature on health-care quality improvement.
Finally, in Chapter 9, we distil the implications of our findings for health-care policy, practice and research.
As recommended by our study’s Advisory Group, this report presents findings in a way that highlights the themes that cut across individual trusts’ cultures and contexts. As a result, it does not include detailed descriptions of the five individual organisational realities and challenges. Our aim was to suspend our reflections on (the more familiar aspects of) hierarchical structures, power relations and organisational cultures and instead to focus on the interactions and associations of human and non-human actors around patient experience work. This is one of the key innovative aspects of our study.
Chapter 2 Research objectives
The main aim of the study was to explore and enhance the organisational strategies and practices through which patient experience data are collected, interpreted and translated into quality improvements in acute NHS hospital trusts. The secondary aim was to understand and optimise the involvement and responsibilities of nurses in senior managerial and front-line roles with respect to such data.
Our objectives were to:
- Identify suitable case-study organisations for our in-depth qualitative fieldwork. This was based on a sampling frame that drew on the CQC’s reporting of the national Adult Inpatient Survey results and took into account the findings from the HS&DR study by Locock et al.50
- Carry out an ANT-informed study of the journeys of patient experience data situated within two clinical services (cancer and dementia care) in each of our case-study sites. This aimed to explore the origins of these data, what these data do and how they interact with human actors to ultimately influence, and/or translate into, quality improvements.
- Distil generalisable principles that may facilitate the journeys of patient experience data to quality improvements in acute NHS hospitals.
- Develop practical recommendations and actionable guidance for acute NHS hospital trusts that will optimise their use of patient experience data for quality improvement. This was carried out together with stakeholders from the case-study sites (including patients and carers), national policy-makers and representatives from patient organisations.
Chapter 3 Methodologies and changes to the protocol
The study was organised in two overlapping phases. Phase 1 (February 2016–January 2018) comprised the majority of the ethnographic fieldwork within the five participating NHS trusts. Phase 2 (January–May 2018) included the sense-making workshops modelled on JIFs51 that were held first in London in January 2018 (with representatives from all five trusts and policy-makers) and then at each of the individual trusts (February–May 2018).
Phase 1
The ethnographic fieldwork drew on ANT-inspired approaches for the study of organisational processes. We carried out interviews, observations and documentary analyses at five acute NHS hospital trusts in England over a 13-month period. To identify potential participating trusts, we constructed a sampling frame based on the following:
- acute trusts’ scores on section 10 of the 2014 national inpatient experience survey, as reported online by the CQC (see the CQC website2 for the latest survey results available)
- preliminary findings from a national survey undertaken as part of another NIHR study, led by Professor Louise Locock, under the same funding call50
- trusts’ characteristics, including geographical location, size and willingness to participate.
We reviewed questions 67–70 of section 10 of the survey (‘overall views and experiences’ section). These questions asked patients to indicate whether or not they felt that they were treated with respect and dignity; to rate their experience of care on a numeric scale; to report whether or not they had been asked about their experience of care; and to report whether or not they had been given information about how to complain.
We then grouped trusts according to whether they were performing ‘better than others’ on one or more dimensions of ‘overall views and experiences’, performing ‘about the same as others’ on all four dimensions of ‘overall views and experiences’ or performing ‘worse than others’ on one or more dimensions of ‘overall views and experiences’ (Figure 1).
We then excluded trusts that had already been approached by Professor Locock’s team to be recruited to the main part of her study, because participating in additional research would have constituted an excessive burden. When approaching trusts, we considered information from Professor Locock’s study about whether trusts were collecting patient experience data from patients with cancer and/or patients with dementia.
We originally aimed to recruit a total of four sites: two from the ‘performing about the same as others’ group and one from each of the ‘performing better than others’ and ‘performing worse than others’ groups. However, none of the five eligible trusts from the ‘performing worse than others’ group could be included in the final sample because two trusts declined to take part and three failed to respond. For the other two groups, we held preliminary meetings with five trusts that had expressed interest in our study in response to e-mail invitations. Following these meetings, at which all trusts showed considerable commitment, we decided to carry out fieldwork at all five sites to increase the theoretical generalisability of our findings for relatively little additional cost (NIHR approval for this change was obtained in August 2016). Our final sample included two trusts from the ‘performing better than others’ group and three trusts from the ‘performing about the same as others’ group for section 10 of the national inpatient experience survey. These trusts are located in different areas of England and represent a range of sizes (see Table 3).
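To make the grouping logic above concrete, the following sketch is purely illustrative and not part of the study: it applies the three-group classification and the Locock-study exclusion to invented data. All trust names, band values and the `approached_for_locock_study` flag are hypothetical; the study itself worked from the CQC’s published survey results, not from a dataset in this form.

```python
# Illustrative sketch only (invented data): grouping trusts into the three
# sampling-frame groups described above and excluding trusts already
# approached for the Locock study.

# Each record holds a trust's CQC banding ('better', 'same' or 'worse') on the
# four 'overall views and experiences' questions (67-70).
trusts = [
    {"name": "Trust 1", "bands": ["same", "same", "same", "same"],
     "approached_for_locock_study": False},
    {"name": "Trust 2", "bands": ["better", "same", "same", "same"],
     "approached_for_locock_study": False},
    {"name": "Trust 3", "bands": ["worse", "same", "worse", "same"],
     "approached_for_locock_study": True},
]


def performance_group(bands):
    """Assign a trust to one of the three sampling-frame groups.

    Assumption: a trust banded 'better' on any dimension is placed in the
    'better than others' group; the source text does not say how a trust
    banded both 'better' and 'worse' on different dimensions was handled.
    """
    if "better" in bands:
        return "performing better than others"    # one or more dimensions
    if "worse" in bands:
        return "performing worse than others"     # one or more dimensions
    return "performing about the same as others"  # all four dimensions


# Build the frame, skipping trusts already approached for the Locock study.
frame = {}
for trust in trusts:
    if trust["approached_for_locock_study"]:
        continue
    frame.setdefault(performance_group(trust["bands"]), []).append(trust["name"])

print(frame)
# {'performing about the same as others': ['Trust 1'],
#  'performing better than others': ['Trust 2']}
```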
Our observations and interviews in each of the five trusts focused on patient experience work in three broad areas: at the trust level, in cancer care and in dementia care. Three researchers carried out fieldwork at the five trusts, visiting them for meetings, informal conversations, observations and formal individual interviews (Table 1).
| Trust | Visits (n) | Days of fieldwork | Interviews (n) | Interview participants |
|---|---|---|---|---|
| A | 24 | 25 | 12 | 9 staff, 3 patients/carers and/or governors |
| B | 9 | 20 | 12 | 10 staff, 2 patients/carers and/or governors |
| C | 7 | 27.5 | 15 | 13 staff, 2 patients/carers and/or governors |
| D | 11 | 24 | 11 | 10 staff, 1 patient/carer and/or governor |
| E | 6 | 20 | 15 | 11 staff, 4 patients/carers and/or governors |
At the trust level, some of our fieldwork focused on observing and talking to those with responsibility for patient experience data, such as members of patient experience teams (where these teams existed), as well as senior trust staff (e.g. heads of patient experience, directors of nursing) with responsibility for patient experience. We spent time in patient experience offices, observing the collection, processing, analysis and communication of feedback. We also observed patient experience committee meetings, quality committee meetings, trust board meetings, governors’ meetings on patient experience, meetings relating to complaints and trust-wide nursing meetings such as Matrons’ Forums. In addition, we accompanied governors and/or trust directors and other senior staff on ‘walkarounds’ and observed ward accreditation or assessment processes.
As outlined in our protocol, we selected dementia care and cancer care services because their similarities (a high number of patients; inpatients admitted to different wards across the hospital; the crucial role of carers) as well as their differences (the long-standing use of well-established formats for patient experience data in cancer care, in contrast to the challenges of documenting patient experience in dementia care) would allow us to draw useful comparisons.
Neither cancer services nor dementia services at the participating trusts constituted whole, discrete environments; rather, each comprised a set of care and administrative practices distributed across wards, clinics, departments and divisions. However, at all trusts, cancer services existed as an administrative unit responsible for conducting audits and surveys and managing compliance with nationally mandated targets and quality standards, whereas dementia services did not. We grew to know cancer and dementia lead nurses and clinical nurse specialists (CNSs), as well as managers of wards to which patients with cancer or patients with dementia were often admitted (e.g. surgical wards for the former; care of the elderly wards for the latter). We also observed cancer CNS team meetings, ward manager/senior sister meetings and dementia committee/steering group meetings, and we interviewed and talked to cancer managers and administrative staff from ‘cancer services’ and observed their meetings at which patient experience data were discussed.
Although our study was largely focused on trust staff and organisational practices, we also interviewed patients, carers and former patients, and observed trust-sponsored meetings of patients, carers or former patients at which patient experience data were presented and discussed. Our observations and conversations with key informants at the trusts aimed to clarify what types of patient experience data were being generated and used within the organisations. We considered a variety of artefacts conveying patient experience information, paying attention to the interactive contexts (e.g. meetings, conversations, reports) in which they were deployed. At all trusts, we paid significant attention to the modes of generation, processing, interpretation and use of the FFT, the CQC national Adult Inpatient Survey2 and the NCPES, as these were the formats that offered opportunities for comparison across all trusts in our study. We also paid particular attention to data formats that were specific to an individual trust or had different prominence in different organisations, prioritising the links they provided to QI work within a particular organisation. The researchers took detailed handwritten notes during observations and interviews, which were typed or written up into detailed field notes soon afterwards. We also viewed and/or collected documentary evidence, including steering group/committee meeting minutes and agendas, patient experience/quality strategies, patient experience executive ‘walkaround’ forms, patient-story booklets, and board meeting minutes and agendas. For phase 1 we also carried out individual semistructured interviews with 65 participants. These formal interviews were audio-recorded and transcribed for analysis. A breakdown of the interviewees’ characteristics is presented in Table 1. We selected potential participants for one-to-one semistructured interviews on the basis of our observations and their involvement with patient experience data. Transcripts were anonymised at the point of transcription and assigned a composite numeric identifier.
Phase 2
In phase 2, we carried out a series of multistakeholder workshops in the format of a JIF: a type of group discussion aimed at encouraging ‘perspective taking’ and joint decision-making.51 We organised six JIFs in total: one cross-site JIF in London (January 2018), which brought together key participants from each of the five study sites and policy-makers from NHS England and other organisations, and five local JIFs, one at each study site (February–May 2018), which a range of trust staff attended. Although in our original proposal we had envisaged the local workshops taking place before the cross-site JIF in London, in the course of the study we determined that it would be more fruitful to reverse the sequence and enable cross-site exchange and comparison before holding in-depth discussions of what learning could be extracted from our data analysis for each specific site. Our fieldwork at participating trusts highlighted that members of staff at all trusts were eager to learn about the other participating trusts and the detail of their patient experience data work. This led us to conclude that early sharing of thoughts and experiences across trusts would make for richer discussions during the local trust-based workshops, allowing for reflection on how different practices may or may not be implemented within local contexts and constraints.
Cross-site Joint Interpretive Forum in London
Our first workshop was held in London in January 2018 and was attended by representatives from all five trusts and five policy-makers as well as five members of the research team and a researcher colleague who took detailed notes of the workshop’s proceedings. This 4-hour workshop aimed to allow participants to share information about the patient experience practices at their trust, to enable discussion of the key issues in patient experience work (elicited with the aid of four deliberately provocative statements prepared by the research team on the basis of our emerging findings), and to provide a space for the presentation and discussion of preliminary findings from the study and their potential implications for policy and practice.
The cross-site JIF comprised four inter-related activities, organised around a structure carefully planned to maximise participation and dialogue. These activities were:
- poster walk
- provocations activity
- presentation of emerging themes from our fieldwork
- discussion of implications for the trust(s).
We detail each of these in turn.
Poster walk
An objective of the cross-site JIF was to provide an opportunity for study sites to learn about each other’s patient experience data practices and organisation. Until then, study sites had been kept anonymous from each other and trust staff were keen to learn the identities of the participating trusts. We determined that asking each trust to present this information themselves at the workshop would be passive, time-consuming and subject to variation in quality and format; rather, we wanted participants to be actively engaged for most of the workshop. We therefore decided to present key information about patient experience data at each of the trusts by way of five posters, one for each study site, which participants were asked to read and comment on using sticky notes. The notes allowed the research team and JIF participants to gauge what participants found interesting about each study site trust’s activities. The research team designed the posters, which were printed on A0-size paper. Each poster consisted of the same four elements:
- basic information about the trust – the trust name, a brief 50-word description of the trust, the number of staff and beds, the trust location on a map of England and CQC ratings (‘Overall’ and ‘Caring’ categories)
- a description of the composition and reporting lines of the ‘patient experience team’
- ‘Talking Points’, which mentioned a noteworthy aspect of patient experience data work in each of three different areas: trust-wide, cancer and dementia (Table 2)
- photographs of aspects of patient experience data work (e.g. FFT cards and visualisations).
Talking points

| Trust-wide | Cancer care | Dementia care |
|---|---|---|
| The trust is committed to using experience narratives in both written and filmed form to improve the quality of care via staff education programmes | CNSs have a fundamental role in implementing the action plans that stem from the results of the NCPES | The views and experiences of carers of patients living with dementia are documented by the carers’ survey, administered on the wards or via telephone by volunteers |
| New requirements from the clinical commissioning group (CCG) have enabled trust staff to transform how patient experience data are reported on and used | The lead cancer nurse is mobilising the NCPES results to expand the forums and types of people (e.g. medical staff, patients) involved in producing action plans | The trust relies on a committed key volunteer to collect feedback from carers of patients with dementia |
| The trust has implemented initiatives to actively solicit feedback from patients and their families through PALS (Patient Advice and Liaison Service) clinics; one effect has been to reduce the number of open PALS enquiries and formal complaints | Tumour-specific CNS teams have partnered with patients in order to produce more tailored action in response to the results of the NCPES | Dementia-specific patient experience data collated by the trust are limited; a multiprofessional Dementia Steering Group relies on input from carer representatives, staff carers and the Alzheimer’s Society in developing quality initiatives to improve the experiences of people with dementia |
| The main vehicle for ensuring that patient feedback is acted on by front-line staff is a long-running ward assessment and accreditation system that is run by a dedicated senior nurse and actively involves executive and non-executive directors of the trust | CNSs embed understanding patient experience and acting on feedback in their working practices and professional development, and often know how to use governance structures to promote discussion of the patient experience work that they do | Acting on patient and carer experience data happens locally at wards or services, and best practice is showcased through division-specific learning sessions and the ward accreditation scheme |
| The trust has a sophisticated system for monitoring and learning from complaints that involves executive and non-executive directors through a standing committee of the board | The lead cancer nurse and her colleagues have used the NCPES to substantially improve the Macmillan Cancer Support and Information Centre | A dedicated ‘dementia team’ of nurses holds clinics and activities that enable them to know individual patients and carers; they rely less on formal feedback mechanisms |
The elements of the posters were produced from data collected during fieldwork. The posters are not included in this report in order to maintain participating trusts’ anonymity (see Appendix 5 for a ‘mock’ poster).
Provocations activity
The second activity consisted of small group discussions of four ‘provocations’. These were statements designed to spark debate among JIF participants. They were formulated on the basis of reflections on fieldwork by the research team during a meeting dedicated to the planning of JIFs, and the wording was subsequently refined by the researchers on the team to ensure maximum impact. The four provocations were as follows:
- National surveys are used to benchmark rather than improve the quality of care.
- The NHS would lose little if the FFT were abolished tomorrow.
- It is easier to improve the experience of patients with cancer than that of patients with dementia.
- Patient experience data do not need to be everybody’s business.
Participants were purposefully divided into four mixed groups, each of which contained members from different trusts and at least one policy-maker. These mixed groups were designed to encourage the sharing of knowledge and perspectives. Groups discussed each provocation for 10 minutes, with a short plenary session after each round in order to hear feedback from each group. Members of the research team assigned themselves to groups in order to pose questions and guide discussion if the group seemed to be straying from the statement under consideration. One purpose of the poster walk and provocations activities was to help participants engage with the study context and each other’s perspectives on patient experience before hearing from the research team directly about emerging themes from the fieldwork. This aimed to encourage active participation.
Presentation of emerging themes from fieldwork
The research team presented emerging themes from the fieldwork. The emerging themes were produced through discussions among the research team over several meetings in December 2017 and January 2018. We were committed to checking that our emerging research findings were relevant to trusts before drafting our report. The purpose of the presentation (and of the JIF as a whole) was to test whether or not the kinds of analysis and ideas we were producing, and which we considered valuable, would be of interest to study site trusts, and whether or not they would be able to act to refashion practice and policy on the basis of our project report. We provide more detail on the content of the presentation and the discussion around it in Chapter 7.
Discussion of implications for the trusts
Following our presentation, we asked participants to re-group with their own trust colleagues; policy-makers from NHS England, NHS Improvement and the Point of Care Foundation formed a group of their own. We asked participants to consider the implications of the presentation and workshop as a whole for their own trust or area of policy work. This was a 30-minute activity and each group was joined by a member of the research team. At the end of the activity, we called on groups to share their thoughts on implications in a plenary session. At the end of the event, the research team asked for oral feedback from participants on the JIF (e.g. what worked well, what could be improved) and for preliminary thoughts on whether or not similar activities would be suitable at each of the JIFs to be held at the study sites.
Local trust-based Joint Interpretive Forums
Following the London JIF, we discussed with each of our key contacts at the study sites the structure and content of JIFs to be held at each trust. Our contacts, who themselves had attended the London JIF, reported that they wanted to replicate the same structure and content for staff at their trust with certain local modifications: some of these were requested by trusts, others were the result of time constraints. All contacts were involved in planning the JIFs. Some trusts contributed additional statements for the provocation activity. The JIFs at trusts followed the same basic structure of four linked activities as outlined in the previous section.
Our key collaborators at each trust were central to organising the JIFs. At all trusts, they booked rooms, issued invitations to participants and organised catering. These collaborators, in varying degrees, also co-chaired the workshop, asked questions of participants, took notes and summarised proposals for change in trust practice. Participants for the JIF workshops were purposively selected, in collaboration with trust staff, on the basis of their role in the organisation, participation in earlier phases of the study and their willingness and availability to participate (see Table 6). The group discussions at the JIFs were captured in field notes taken immediately after the event and included in the study data set.
Patient and public involvement
Independent consultant and patient and public involvement (PPI) advisor Christine Chapman contributed to the development of the study design and the PPI strategy for the project. She offered detailed feedback on draft versions of both the outline and full application, in particular with regard to the language and style of the Plain English summary; the importance of online platforms and social media in the sharing of patient experiences of care; due attention to the role of patients’ carers; and the inclusion of patients and patient organisations in the dissemination strategy. A few months into the study, Christine Chapman became unable to continue in her role and was replaced by PPI advisor Sally Brearley. Throughout the study, Sally Brearley took part in team and Advisory Group meetings, reviewed and commented on early drafts of the final report, and provided particular input into the production of the Plain English summary and of the video animation summarising some of our key findings for non-academic audiences.
In designing the study, we had one-to-one conversations with two carers of people with dementia and an advisory meeting with two users of cancer services. Following these, we refined our fieldwork strategy to ensure that issues of mental capacity were duly taken into account and increased the time allocated for patient advisor input (and the corresponding PPI budget). Two patient and PPI advisors were members of the Study Advisory Group. They took part in Advisory Group meetings and contributed to steering the research process via these meetings. They also provided detailed feedback on draft versions of the video animation.
In phase 2 of the study, for the cross-site and trust-based JIFs, we aimed to involve patient and/or carer representatives. We consider participation in these meetings a PPI activity, in that preliminary study findings were discussed and recommendations for policy and practice were outlined. Dissemination activities have taken place at all participating trusts following study completion. Whenever possible, we have involved patient representatives in these events, especially those with a specific interest in patient experience work.
Ethics approval
Ethics approval was obtained in August 2016 from the London Bridge Research Ethics Committee (Integrated Research Application System identification number 18882). Health Research Authority approval was granted in October 2016 and individual trust research and development (R&D) approvals were secured between November 2016 and January 2017.
Changes to protocol
Versions 2 (July 2016) and 3 (December 2016) of the protocol reflected changes in the number and composition of study sites: the increase from four to five study sites, two of which were performing ‘better than’ others and three the ‘same as’ others according to section 10 of the CQC report (details of why we could not recruit a trust performing ‘worse than’ others were provided to, and approved by, the NIHR before the protocol was updated). In version 4 (March 2017), the total number of planned interviews was amended to reflect the number indicated in our ethics application, which allowed for up to 16 interviews per trust rather than the 12 originally planned. Version 5 (March 2018) reflected the change in the sequence of JIFs, with the cross-site JIF in London taking place before the individual JIFs at the five participating sites.
Chapter 4 Modes of analysis
Text data included documents collected at the participating trusts (such as agendas and minutes from steering group, committee and board meetings; patient-story booklets; walkaround forms; and trust-specific questionnaires), transcripts of audio-recorded interviews and of dictated notes from interviews that were not recorded, and field notes from informal conversations and observations. We also took photographs that we felt captured important aspects of patient experience data collection and processing and used these to support our analysis of the text data.

Our analysis was based on a combination of re-examination of our textual data (re-reading notes and transcripts, producing memos and reflective notes, and open coding, mostly manual) and discussion, in groups of different sizes, of observations and reflections on field visits. By this, we mean that the analysis proceeded through individual work as much as through discussion of emerging themes and ideas. Two of the researchers on the team (AD and GZ) carried out fieldwork at four of the five sites. They shared an office and, therefore, had the opportunity to share views and nascent analytical threads as these emerged. They had regular meetings (every 2–3 weeks, depending on fieldwork commitments) with the researcher (Dr Mary Adams first, and SD after her return from maternity leave) who carried out fieldwork at the fifth trust. These regular meetings allowed for discussion of the practices observed and the conversations had at the NHS trusts. In our analysis, we relied on a mix of trust documents, interview transcripts and field notes; our guiding principle was our commitment to grounding our analytical themes in the available evidence and to actively searching for disconfirming examples that weakened a particular line of argument. Larger meetings (1–1.5 days’ duration) with the whole research team were also held (five meetings between January 2017 and March 2018) to discuss emerging themes and potentially useful analytical and reporting approaches.
Our principal mode of analysis was informed by ANT, developed by Bruno Latour, Michel Callon and John Law as part of Science and Technology Studies during the 1980s, and since taken in several directions as it has been applied to the study of health care and health organisations.29–33 Although it carries ‘theory’ in its name, ANT is better understood as a range of methods for conducting research that aim to describe the connections that link together humans and non-humans (e.g. objects, technologies, policies and ideas). In particular, ANT seeks to describe how these connections come to be formed, what holds them together and what they produce.
In ANT, the systems of interaction and mutual influence between people and ‘things’, humans and non-humans, are called actor–networks. In an ANT framework, actors (e.g. nurses in charge of collecting patient experience data) are able to act and to generate effects in the world only through interacting with other entities (e.g. the questionnaires used to collect data, the organisational reporting lines they are required to follow, the targets that they need to meet). In addition, in an ANT framework, and following Latour’s terminology,52 we can see different types of data as ‘actants’, which is to say ‘entities that are endowed with the potential to produce change in, and in turn to be transformed by, the course of action of other actors’.25
As a family of approaches rather than a unitary theory, ANT provides a framework and tools that allow us to pay attention to the ‘materiality’ of organisational activity and the inseparability of the technical and the social in organisational practices. 43–45 Working with ANT tools means subscribing to the ‘notion that everything that exists in the world is the outcome of an interaction between two or more entities (be they human and/or non-human)’ and, therefore, being interested in examining very closely the connections between humans and non-humans. 25 It is with this sensibility that we approached the analysis (as well as the collection) of our data. In practice, ‘doing’ ANT-informed research means that analysis is not, and cannot be, separated from fieldwork; ANT does not recognise essentialisms. This is one of the challenges of thinking with ANT. However, in our experience, using a sensibility commensurate with ANT is fruitful for thinking about patient experience data in innovative ways that can still produce actionable guidance for health-care organisations.
Three interlinked ideas follow from this basic statement about the primacy of relations between actors (or actants) in ANT. These ideas ask us not to make certain assumptions in the collection and analysis of our data. The first is that, because everything is the outcome of a relation, including the form and characteristics that an entity takes, the distinction between humans and non-humans is less important than the qualities of entities that are produced through a particular interaction. Thus, in looking at research data, we have foregrounded the qualities of entities (whether or not they can act, whether or not they have effects, whether or not they structure organisational practices) that emerge through relations, rather than assuming that those qualities are inherent in people or things themselves.
The second idea is that, if relations and interactions are key to how entities achieve their qualities, then an ANT analysis needs to recognise, describe and account for these interactions. In doing so, it should also pay equal attention to human and non-human actors, as the latter can, and do, have similar qualities (as a product of interactions) to the former. As we mentioned earlier, in the case of patient experience, exploring the enactment of data means moving beyond analyses that see patient experience data as inert, stable entities open to technical manipulation and refinement. It means approaching the research data by asking the following question: in what circumstances do patient experience data, in relation to other actors, make quality improvement possible? By noting, describing and analysing these interactions, we can understand how relations around patient experience data are continuously produced, and to what effect. Paying attention to the enactment of patient experience data (and looking at qualities emerging in interactions rather than at specific identities) also has the corollary of providing possible alternative visions of hospital organisation, for instance by revealing the role of actors who may usually be considered marginal but can be central in ensuring improvements in care.
Finally, the third idea that guided our mode of analysis is that we do not make assumptions about the presumed power, size and influence of actors. For example, we do not assume that patient experience data presented at a trust board meeting are ‘more important’ or ‘more authoritative’ than data presented at a ward sisters’ meeting; their qualities emerge through particular interactions. We conducted our analysis by adopting this ‘flattened’ perspective, which treats actors as equally important regardless of their assumed place in an institution. This is not to say that our analysis neglects issues of power in hospital organisation; rather, it examines how particular interactions produce power as a quality of entities, whether human or non-human.
In practice, these three ideas meant that we focused on identifying what people operating within the participating trusts identified as patient experience data, observing the forms these data took, if/how they moved, whether or not and how they changed form, and the interactions of which they were part (i.e. in which ways data recruited, and were recruited into, more or less stable relationships by human and non-human actors). By observing practices and holding repeated conversations with key informants at the trusts, we built rich descriptions of the ways in which different elements of patient experience data work came together, showed identifiable patterns and/or moved across the organisation. In the ways discussed above (individual analysis and group discussions), we examined our descriptions for each trust and compared practices and patterns for the three main areas of focus (trust-wide patient experience data work, work within cancer care and the care of people living with dementia) both within each trust and across trusts. The findings emerging from these comparisons were used to structure and feed into the JIFs of phase 2 (the sense-making phase) of the study. These took the form of workshops; they constituted further data, but they were also part of the analytical process and contributed to local impact at the participating organisations. Given their significance in the analytical thinking for this study, we describe them in detail in Chapter 7.
The analysis of the research data deriving from the JIFs was somewhat different. As several members of the team attended the cross-site JIF, we were able to take notes of the discussions taking place in the table groups as well as in the larger group. Local trust-based JIFs were facilitated by a single researcher, which meant that notes were written down immediately after the event. Our plan was to analyse notes from all JIFs thematically, with the aim of distilling practical and generalisable principles for enhancing the use of patient experience data for quality improvement. However, as a result of the delays in R&D approvals at the beginning of the study and the reduction in team capacity owing to maternity leave, the local trust-based JIFs had to be held towards the very end of the study (February–May 2018). In view of this, we found it more useful and practically relevant to discuss our notes and impressions from the trust-based JIFs in face-to-face meetings (involving AD, SD and GR). Notes from the JIFs were therefore analysed for themes, but no systematic coding was carried out.
Chapter 5 Findings 1: what counts as patient experience data and who deals with them?
In this section, we begin to present the findings from our data analysis. After a very brief description of the five participating trusts, we explore what counts as patient experience data and who works with them. We do so by looking at (1) the variety of entities that constitute patient experience data, as well as the multiple ‘versions’ into which each type of data transforms as it is collected, analysed, interpreted and used to guide practice; (2) the varying degrees of regularity that characterise the processes of collection, analysis, interpretation and use of data; and (3) the nature and composition of patient experience teams at the study sites and the different ways in which these teams (or their absence) affected the practicalities of data work at the sites. The characterisation of data and data practices provided in this section will constitute the basis for exploring how data are interacted with in Chapter 6.
As detailed in Chapter 3, we carried out our fieldwork at five acute NHS hospital trusts with a specific focus on cancer and dementia care. We assigned each trust a letter (A, B, C, D and E) and gathered detailed information regarding how each organisation operated, what their internal structure looked like, what areas they served, their capacity, their staffing and their main organisational strengths and pressures. In this report, we deliberately refrain from providing too detailed a picture of each trust. This is because (1) we wish to maintain confidentiality wherever possible; (2) we want to keep our focus on the observation of practices and interactions; and (3) we follow our Advisory Group’s recommendation that we organise our report findings by theme rather than by site. In keeping with our ANT-informed approach, we draw attention to the actors and interactions at hand as we describe in detail the characteristics of these interactions, the actors’ transformations and the effects of both. In Table 3, we summarise some of the key features of the five trusts in our study. These provide a sense of the variation in the organisational arrangements encountered, so we do not discuss them one by one in the text. Rather than building individual trust profiles, we draw attention to the similarities and differences between these trusts and the reasons why any of these similarities or differences might appear significant to us.
Table 3 Summary of key features of the five participating trusts

| Feature | Trust A | Trust B | Trust C | Trust D | Trust E |
|---|---|---|---|---|---|
| Approximate number of beds and staff | Beds: 750; staff: 5000 | Beds: 450; staff: 4500 | Beds: 1000; staff: 7100 | Beds: 950; staff: 7000 | Beds: 2300; staff: 14,500 |
| Foundation trust | Yes | Yes | No | Yes | Yes |
| Formally designated ‘patient experience’ team? | Yes: two patient experience facilitators; one data entry analyst | No: PALS staff carry out some patient experience functions | Yes: one patient experience manager; one complaints and PALS manager; six patient experience staff members; four PALS officers; two formal complaints officers | No: a team of professionals at corporate level (i.e. lead nurse for corporate services, corporate matron, quality improvement team, assistant director of service user experience) oversees the integration of patient experience work in the trust | Yes: one head of patient experience; one patient experience and involvement officer; one information analyst; one equality and diversity lead; one patient relations service manager; several complaints resolution and investigation officers (plus complaints administrator and secretaries); administrative and clerical support |
| Board-level responsibility for patient experience | Director of nursing | Director of nursing | Director of nursing | Director of nursing | Nursing and patient services director |
| Report to trust board | Yes: board meeting opens with a patient story; patient experience standing item on (1) formal complaints, (2) local patient survey and (3) FFT results | Yes: as part of quality report (patient experience section written by head of PALS); chief nurse presents FFT results (without open comments) and a patient story/film/NHS Choices extract or audio clip is presented | Yes: board meeting opens with patient story; patient experience is part of ‘Integrated Performance Report’ (informed by the ‘Safety and Quality Committee’ report – see below) | Yes: monthly integrated performance report includes three dashboards dedicated to patient experience; in addition, a 6-monthly patient experience report goes directly to the board | Yes: head of patient experience presents a quarterly patient experience report to the board |
| Links between patient experience and QI | Patient and staff experience committee reports to the quality assurance and learning committee (considers FFT, local survey and complaints) | Head of PALS reports to improvement programme manager; presents FFT data, comments data and complaints data to the patient quality committee; writes the patient experience section of the quality report | Patient experience and QI come together at the quality governance and learning group (where the reports that each team produces are discussed, to avoid misinterpretations, before being collated into a single ‘Safety and Quality Committee’ report) | Through the quality and patient experience committee, the ward accreditation process, and the patient, family and carer experience steering group; also at divisional and clinical governance level | Patient experience steering group (reports to clinical governance and quality committee) includes head of patient experience and director of quality and effectiveness; patient safety and quality review panels are chaired by the medical director |
As expected, in line with the existing literature on patient experience data discussed earlier, each of the trusts taking part in our study collected a vast amount of patient experience data in various formats. Table 4 summarises the data formats we became aware of during our fieldwork (this is not intended to be an exhaustive list).
Table 4 Types of patient experience data collected at the study sites

| Trusts | Type of patient experience data collected |
|---|---|
| All trusts (mandatory) | FFT;53 NCPES;54 formal complaints; National Patient Experience Survey Programme2 (not strictly mandatory but relevant to CQC inspection) |
| All trusts | Informal complaints (e.g. ‘concerns’); compliments (e.g. letters, thank-you cards, e-mails); data feeding into Cancer Services Peer Review; patient stories (written and/or filmed and/or presented in person); local tumour-specific cancer patient experience surveys; National Dementia Audit carers’ survey;55 online feedback; executive, non-executive or governor ‘walkarounds’; bereavement support feedback; Patient-Led Assessments of the Care Environment |
| Some trusts | Local carers’ survey (dementia); trust bespoke inpatient survey |
In exploring the range of ‘feedback’ (as it is often referred to) that trusts collected and analysed, we began with data formats that were well established before focusing on the specificities of the two clinical areas (cancer and dementia) we had selected. We looked at how non-area-specific data were collected, collated, processed and organised; in particular, we looked at the FFT for inpatients and at the National Inpatient Survey. As our examples in this chapter will illustrate, all five trusts invested considerable time and resources in generating data aimed at providing a picture of patients’ experiences of care; they also collected information on staff experiences of providing care, but this was not a focus of our work. However, we found a great deal of variation in how data were generated and processed at different trusts, which applied to mandatory as well as non-mandatory data formats. In particular, how the FFT was enacted in practice in the five trusts proved an accessible example, which we use to highlight this variation.
The Friends and Family Test
The FFT is mandatory for all NHS acute hospital trusts in England. Its adoption was announced in 2012 by the prime minister David Cameron; first implemented in 2013, it was then rolled out to all trusts in England in 2014. For most clinical services, the test is based on one essential question: ‘How likely are you to recommend our ward to friends and family if they needed similar care or treatment?’. The question can be answered on a five-point scale from ‘extremely likely’ to ‘extremely unlikely’ or by selecting ‘I don’t know’. There is also a free-text box for open comments. The questionnaire is intended to be anonymous; however, there is space for patients to write their name and contact details if they wish to be contacted about their feedback.
The FFT applied to all clinical areas in each trust (with varying degrees of relevance depending on the clinical area) and it was a particularly useful focus for our observations in at least two respects: (1) in four of the five trusts, significant resources and work went into ensuring that the data were generated, collated and analysed, and that the product of the analysis was reported in a variety of ways; and (2) it exemplified clearly how some of the data we expected to identify as discrete entities were, in fact, made up of a multiplicity of different entities interacting with a range of actors (both human and non-human). The first aspect of the FFT made it a productive starting point for our observations, and the second made it clearer to us how the ANT lenses would be analytically useful. We therefore spend a little time illustrating this particular example below. The point we make about how the FFT transforms into different entities and connects with multiple social actors can be extended to virtually all types of patient experience data we observed.
Collecting the Friends and Family Test
Although the FFT is a nationally mandated instrument, those behind its introduction anticipated that trusts would have a certain degree of autonomy as to how they deployed it. 53 We found a great deal of variation across, and within, the five trusts as to the form and method of collection, analysis and communication of the FFT. Trusts use a variety of methods to administer the FFT (paper, card, text message, online, kiosk) and to collect and analyse the information (trust staff, external contractors, data management software), and show variation in how and where such data are reported (Table 5).
Table 5 FFT administration and processing arrangements at the five trusts

| Feature | Trust A | Trust B | Trust C | Trust D | Trust E |
|---|---|---|---|---|---|
| Average number of FFT responses per month (November 2016–January 2017) | 1980 | 450 | 2389 | 551 | 2361 |
| Form | Paper (produced by the trust) | Mainly card (provided by external contractor); text message in the emergency department | Paper and online | Mainly text message | Card (provided by external contractor), kiosks, online and text messages |
| Contractor | None | Picker Institute Europe, Oxford, UK | None | Healthcare Communications (Healthcare Communications Ltd, Macclesfield, UK; www.healthcare-communications.com/) | Quality Health (Quality Health Limited, Chesterfield, UK; www.quality-health.co.uk/) |
| Management software | Meridian | Not known | Meridian | ENVOY (Healthcare Communications Ltd, Macclesfield, UK; www.healthcare-communications.com/solutions/envoy-messenger/) | Business Objects Launchpad (SAP Ltd, Feltham, UK; www.sap.com/uk/products/bi-platform.html) |
| Discussed at board? | Yes | Yes | Yes | No | Yes |
| Used for benchmarking? | Yes | Yes | Yes | No | Yes |
In addition to the main FFT question and the free-text box, each trust that used a paper-based questionnaire asked several supplementary questions. These explored aspects of care that are considered locally relevant to patient experience (e.g. trust in staff, worries and fears, involvement in the discharge process). Most also asked for a variety of demographic information about the patient (e.g. age, gender, ethnicity). This made the total number of questions about care (ward and demographic information aside) across the trusts range from two to nine.
Four of our five study trusts (all but trust D) principally used paper-based FFT questionnaires for data collection. Who distributed the forms varied across and within these four trusts, with different wards or service areas relying on a variety of staff (e.g. nurses, health-care assistants, ward clerks, volunteers) to encourage patients to leave feedback; the allocation of FFT tasks in this regard is at the discretion of ward or clinic managers. Ideally, staff would give cards to inpatients on discharge and ask them to complete them before leaving hospital or to return them as soon as possible afterwards. However, discharge is a complex process and we heard repeated criticism from ward managers and matrons that this was an inappropriate point at which to ask patients for feedback. One deputy chief nurse recalled matrons telling her at a meeting that:
. . . there doesn’t seem to be a natural point in the patient pathway to hand out the cards. [. . .] Handing them out on discharge doesn’t work for [wards] – there are too many other things going on.
Field notes, 3 May 2017, trust B
In addition, staff often expressed the view that, because inpatients were asked for feedback on discharge, they used the FFT to rate and comment on the whole of their hospital experience, which may have included contact with several wards or services throughout their inpatient stay. Thus, the fact that the card was distributed at this point caused some people to doubt the validity of the data. Completed paper and card FFTs are collected from patients and are usually placed in FFT boxes found in ward areas; patients (or their carers) also post their cards into such boxes (Figure 2).
All trusts that used paper or card had systems for collecting and transporting questionnaires from wards and service areas to a central location. However, on some wards at trusts A, B, C and E, ward managers regularly looked through the responses they had received before sending them on elsewhere in the hospital. They did this to identify issues that required immediate attention and to photocopy particularly positive comments to relay to colleagues, who might then use them as practice-based feedback towards professional revalidation. At trust A, this ‘scanning’ of comments was standard practice; at trust E, according to the patient experience officer, it was only ward managers ‘who value FFT’ that tended to look through the cards at this stage. Administrators with responsibility for the FFT reported that those wards that looked at comments as they came in, ‘in near real time’, were often those where ward managers and their staff handed out cards regularly and, therefore, reported large numbers of responses, which in turn made staff more invested in the FFT. As the same patient experience officer said:
It’s usually [where] the sister [has taken responsibility] that it’s worked. ‘I hear you’re being discharged today, would you mind giving us your feedback and leaving it in the box as you leave’. Some areas don’t do that or just leave the card on the bedside or just have a pile near the exit and they don’t do as well.
Interview 001, trust E
At trusts A, B and C, staff and volunteers from the patient experience teams went to wards on a regular basis to collect patient responses. Trust C was particularly systematic: members of the patient experience team rotated around the five areas of the hospital to collect forms on specific days. At trust E, by contrast, ward staff themselves emptied boxes weekly or fortnightly and took completed cards either directly to the patient experience team’s office or to the hospital’s main reception, where a member of the patient relations team collected them. We note here that NHS England requires trusts to process the FFT results and submit them monthly via the relevant submission system (UNIFY2). For each area of service, trusts are required to submit:
. . . the total number of responses in each response category of the scale (extremely likely, likely, etc); the total number of responses for each collection method; and the total number of people eligible to respond (for inpatients, A&E, and maternity question 2 only).
NHS England56
The FFT results are also included in trusts’ monthly ‘Open and Honest Care’ reports, which are submitted to NHS England and published on trust websites.
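For readers who want a concrete sense of what this monthly return involves, it is essentially a set of tallies. The following minimal sketch illustrates the logic; the record format and field names are our own invention, not those of UNIFY2 or of any trust system we observed:

```python
from collections import Counter

# Purely illustrative record format: each response is a
# (ward, collection_method, answer) triple.
responses = [
    ("Ward G32", "paper", "extremely likely"),
    ("Ward G32", "paper", "likely"),
    ("A&E", "text message", "extremely likely"),
]

SCALE = [
    "extremely likely", "likely", "neither likely nor unlikely",
    "unlikely", "extremely unlikely", "don't know",
]

def monthly_return(responses):
    """Tally responses per scale category and per collection method,
    as the quoted NHS England guidance requires."""
    by_category = Counter(answer for _, _, answer in responses)
    by_method = Counter(method for _, method, _ in responses)
    return {cat: by_category.get(cat, 0) for cat in SCALE}, dict(by_method)

by_category, by_method = monthly_return(responses)
```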
Through this initial description of the completion and collection of the FFT, we have highlighted the sometimes elaborate, but always effortful, character of the endeavour. We turn next to the ways in which the FFT as information on paper becomes systematised and ordered through being processed centrally.
Friends and Family Test transformations
At each of the study site trusts, the information generated through the FFT went through a significant number of steps, transforming the information in the process. These transformations are important for understanding how data come to act and be expressed in multiple ways, thus revealing themselves as neither unitary nor stable objects. The many forms data take and the interactions in which they are enmeshed enable them to be understood as multiple, with corresponding multiple effects. This is an argument we make throughout the report. Below, we describe some of the key ways in which trusts manage the information generated through the FFT and the transformations this form of patient experience data undergoes while it moves around different sites in NHS hospital trusts.
The Friends and Family Test: from paper to electronic
At the trusts that used a paper or card FFT, completed forms were converted into electronic format either in house or through a contracted organisation. At the two trusts where this conversion from paper to electronic format happened in house, staff or volunteers were responsible for transferring the FFT responses to a database (Figure 3). At trust C, the patient services staff who collect the forms from wards bring them to their office, where volunteers organise them by ward and service. These sorted piles of forms are then labelled and placed on a shelf to await further processing. Volunteers take a pile, read each FFT response and transfer the information into Meridian, the data management software used by the trust. Volunteers transcribe free-text comments verbatim. If a comment includes the name of a member of staff, the name is retained only if the comment is positive; in the case of negative comments, the volunteer removes the staff member’s name. Once the data have been transferred to Meridian, the FFT forms are stored for 4 months in two consecutive locations in the patient services office. After this, the paper forms are destroyed.
At trust A, by contrast, a salaried data entry officer has the task of transferring information from paper FFT responses to Meridian; she processes approximately 400 forms per day. Moreover, the officer conducts some work on the comments while inputting them into the database: first, interpreting and rewording comments that she finds unclear; and, second, labelling each comment as positive, negative or neutral. The paper FFT forms are kept for several years, first on site and then in paid-for storage off site, before being destroyed. The other two trusts that used largely paper- or card-based FFT questionnaires (trusts B and E) engaged external contractors to read, collate and analyse their data, as well as to design their FFT cards. However, at both trusts, significant work was conducted by trust staff and volunteers before forms were sent to the contractor. At trust B, Patient Advice and Liaison Service (PALS) officers and volunteers collected cards from wards and organised and tallied them by ward (Figure 4).
Those who sort and tally the cards also scan for comments that stand out (whether very positive or very negative) and communicate these to the team leader, who passes the comment on to the appropriate ward or clinic manager. The handwritten tally of cards received by each ward is recorded electronically on an internal spreadsheet and the cards are then sent weekly or fortnightly to Picker for processing. The head of PALS or the team leader then compares this tally with the numbers provided by Picker after processing. As the team leader remarked: ‘we keep our own tally because sometimes wards are surprised when the Picker results come through showing a low number of responses; they believe they’d submitted more cards than they actually had’ (field notes, 21 June 2017, trust B). This team leader added that, although she would like to use this tally data to identify wards in which the response rate is low or dropping significantly, she currently lacked the resources to do so.
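The reconciliation the team leader describes, and the monitoring she would like to add, amount to a simple comparison of counts. A minimal sketch, with invented data structures and thresholds, might look as follows:

```python
# Purely illustrative: compare a ward-level hand tally of cards sent with the
# counts the contractor reports back, and flag wards whose response numbers
# have dropped sharply between periods. All names and thresholds are invented.
def reconcile(internal_tally, contractor_counts, previous_counts, drop_ratio=0.5):
    issues = []
    for ward, sent in internal_tally.items():
        received = contractor_counts.get(ward, 0)
        if received < sent:
            issues.append(f"{ward}: {sent} cards sent, {received} reported")
        previous = previous_counts.get(ward, 0)
        if previous and received < previous * drop_ratio:
            issues.append(f"{ward}: responses fell from {previous} to {received}")
    return issues

print(reconcile({"Ward 1": 40}, {"Ward 1": 25}, {"Ward 1": 60}))
```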
At trust E, by contrast, this ‘counting’ was a formalised part of the FFT collection and pre-postage process and involved a member of staff from information services rather than patient relations. This person monitored the weekly count and communicated any drop to both the patient experience officer and the deputy director of nursing for action with matrons; she likewise passed on negative comments that ‘leapt out’. After doing this, she posted packets of the completed FFT cards to Quality Health. We note here, therefore, that, in addition to the recommendation scores and comments (and answers to any supplementary questions) that the FFT generates, it also generates a ‘count’, ‘tally’ or ‘response rate’ figure.
For trusts that use external contractors, the cards are processed off site and, 2–4 weeks later, the trust is notified by e-mail that the results of the previous month’s FFT are complete. In the meantime, depending on the contractor and the service provided, designated staff (e.g. at trust B, the head of PALS) can access a live dashboard online, which updates as the contractor processes responses (Figure 5).
For instance, once notified that the FFT results are complete, the head of PALS at trust B then accesses the results from the contractor’s site and sends a link to divisional directors (nursing and medical), matrons, ward managers, senior operational managers and clinical governance facilitators. This link enables these trust staff to download (1) a poster for display (Figure 6), (2) a report that includes benchmarking against other units in a division and a list of the free-text comments (the first page of which is sometimes repurposed as a poster; Figure 7) and (3) two spreadsheets with responses to the two free-text questions asked at this particular trust. Picker redacts names of staff in these reports; the head of PALS is provided with an unredacted version, which she then distributes to divisional directors only.
At trust E (which used Quality Health as a contractor), there was a more elaborate transformation of the data once they were returned. Here, data are supplied in several formats, one of which is ‘raw data’. These raw data are automatically uploaded into the trust’s own ‘data warehouse’, at which point they are subjected to a locally designed ‘location translation’ system. This corrects for variation in the ward or place names used in patient comments (e.g. patients might refer to the same ward as ‘G32’, ‘Ward 32’ or ‘32’) and ensures that the FFT results and comments are allocated to the correct area. Once ‘location translation’ is complete, the results are shared through the trust’s business intelligence platform. This is accessible to everyone in the trust because the data are not regarded as confidential (Quality Health redacts staff names during FFT processing).
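As an illustration of what such a ‘location translation’ step might involve (we did not see the trust’s actual implementation, so the lookup table and names below are hypothetical):

```python
# A minimal sketch of a 'location translation' step, assuming a
# hand-maintained lookup table of variant ward names. The mapping is invented.
LOCATION_MAP = {
    "g32": "Ward G32",
    "ward 32": "Ward G32",
    "32": "Ward G32",
}

def translate_location(raw_name):
    """Map a variant ward name to its canonical form; unknown names are
    flagged for manual review rather than guessed."""
    return LOCATION_MAP.get(raw_name.strip().lower(), f"UNRESOLVED: {raw_name}")

print(translate_location("Ward 32"))  # -> Ward G32
```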
The Friends and Family Test: collection by text message
We have so far described the process by which paper- or card-based FFT questionnaires are collected and transformed into electronic data that are ready to be communicated to a wider audience within trusts. Before we examine how this communication (which entails further transformation of data) happens, we present the different system in place at trust D, which mainly used text messages to administer the FFT. Whereas the paper-based method involves many different people and technologies in multiple interactions, the use of text messages leads to fewer interactions, particularly with trust staff.
The FFT question is sent out by text message to all patients within 48 hours of discharge (excluding inpatients on care of the elderly wards, who receive a paper-based FFT). If the patient replies to the first question, a second text message is sent asking for a free-text comment. The text message is sent by an external contractor (Healthcare Communications) and the data are stored on the contractor’s data management system, ENVOY.
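The logic of this two-step flow can be sketched as follows. This is purely illustrative: we did not examine ENVOY’s interface, and the field names and exclusion list are our own:

```python
from datetime import timedelta

# Hedged sketch of the two-step text-message flow described above.
EXCLUDED_WARDS = {"care of the elderly"}  # these patients receive a paper FFT

FFT_QUESTION = ("How likely are you to recommend our ward to friends and "
                "family if they needed similar care or treatment?")

def schedule_fft_text(discharge):
    """discharge: dict with 'ward', 'discharged_at' (datetime) and 'mobile'."""
    if discharge["ward"] in EXCLUDED_WARDS or not discharge.get("mobile"):
        return None  # no text message is sent for excluded wards
    return {
        "to": discharge["mobile"],
        "send_by": discharge["discharged_at"] + timedelta(hours=48),
        "message": FFT_QUESTION,
    }
# If the patient replies, a second message asking for a free-text comment follows.
```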
Only a very restricted group of people within the hospital can access ENVOY, so the FFT results are not immediately available to all. Rather, the service user experience manager must send a monthly e-mail to each ward or service area with their results. At trust D, the FFT is seen by the staff members we spoke with as something ‘external’ because front-line staff are not responsible for collecting it or encouraging people to respond; in the words of one participant, ‘it happens independently’.
Trust D also runs its own local inpatient experience survey, which generates a large number of responses and whose results ward managers are required to access. The combination of these two factors (a ‘remote’ technology and a competing form of ongoing inpatient data) meant that the FFT as a form of data did not have the same prominence in patient experience data work at this trust as at others. This diversity of collection and recording practices demonstrates that the FFT is not a unitary ‘thing’ across trusts. It is experienced and understood differently within and across trusts depending on who or what interacts with it. Using a team of staff and volunteers to gather and input the FFT data endows the FFT with different characteristics than collecting it by text message ‘off site’.
The Friends and Family Test: dashboards and reports
One of the supposed characteristics of the FFT is its near real-time nature. Data are meant to be generated, analysed and reported in monthly cycles so that they can feed into the NHS England national reports that are publicly available online.
However, the monthly reporting is often subject to delays, particularly when the FFT cards are sent to external organisations for processing. Trusts that do not process the data in house therefore access their results later, and publication on the NHS England website, although monthly, is often in arrears (e.g. results from March 2018 become available in May 2018). As we have seen, staff mitigate these delays by ‘scanning’ for comments before or at the time of processing.
Once the data are processed in the ways described, staff and databases produce reports containing different iterations of the FFT, which vary according to the audience for whom they are generated. Reports have two main functions: (1) to keep relevant members of staff aware of survey results and areas for improvement on an ongoing basis; and (2) to orient discussions at meetings where patient experience data appear as an agenda item. These would usually be patient experience committees or working groups, quality committees and board meetings. The FFT data are also reported at subtrust level at clinical governance meetings, cancer management meetings and nursing one-to-one meetings, to name a few. Such reports often feature dashboards that show the FFT data ‘at a glance’, comparing wards’, departments’ or divisions’ recommendation and response rates over time, as well as benchmarking against trust averages or other comparable trusts (e.g. other trusts in the region, members of the Shelford Group). Below, we present some examples of how FFT data transform in dashboards and reports and the consequences of these transformations.
At trust C, which uses volunteers to record FFT responses, the patient experience manager dislikes the Meridian platform, which is visible to all staff, claiming that its dashboards and reporting mechanisms are not clear. Wanting to reorganise how she and others ‘see’ the FFT, the patient experience manager transfers the FFT data from Meridian to a Microsoft Excel® spreadsheet (Microsoft Corporation, Redmond, WA, USA), copying and pasting each comment. The manager then reads and codes each comment by valence (green for positive, red for negative) and by theme, drawing on the categories used to classify complaints. This spreadsheet, containing trust-wide data, is then e-mailed to ward managers, matrons and other senior nursing, medical and management staff. The manager then draws on a combination of this spreadsheet and the Meridian system to populate a report to the trust’s patient experience committee, which is attended by the director of nursing and senior clinical and nursing leads. The monthly patient experience report features the FFT in the following ways:
- A trust-wide dashboard (Figure 8), which presents the recommendation rate (‘performance’) in groups such as inpatient, emergency and maternity, along with tracking over time and comments on response rates. This FFT dashboard is sourced from the trust’s Quality Governance Group, at which patient experience is also discussed. It follows a standard format that intentionally resembles the data in other areas of quality (e.g. complaints, falls, sepsis and never events).
- A one-page ‘comments noticeboard’, which is a selection of that month’s positive and negative comments, chosen by the patient experience manager.
- A ‘Friends and Family Test scores by area’ section in which the FFT ‘recommend scores’ for each ward and area are reported in order of performance (wards with the highest scores first). This is different from the Excel spreadsheet sent out to staff by the patient experience manager: whereas that is ordered alphabetically by ward, this chart helps the committee identify which wards or areas are low performing. However, no account is taken of the total number of responses; thus, a ward may be listed as high performing on the basis of a single FFT response (we illustrate this pitfall in the sketch after this list).
- An ‘FFT benchmark data’ section in which the FFT response rate and recommend/not recommend percentages are benchmarked for various departments (e.g. inpatient, emergency department) against selected trusts in the same region or of a similar size. The patient experience manager is well aware that FFT data are not intended for benchmarking purposes, but she includes this information because it reassures the trust about where it stands in relation to others.
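To make the small-numbers pitfall in the third item concrete, the following illustrative sketch (with invented figures) shows how ranking by recommend score alone can place a ward with a single response above a ward with 200:

```python
# Illustrative only: the small-numbers pitfall of ranking wards by score
# without reporting the number of responses alongside. Data are invented.
def recommend_rate(answers):
    """Proportion of responses answering 'likely' or 'extremely likely'."""
    positive = sum(a in ("extremely likely", "likely") for a in answers)
    return positive / len(answers) if answers else None

wards = {
    "Ward A": ["extremely likely"],                            # 100% on n = 1
    "Ward B": ["extremely likely"] * 180 + ["unlikely"] * 20,  # 90% on n = 200
}
ranking = sorted(wards, key=lambda w: recommend_rate(wards[w]), reverse=True)
# Ward A tops the ranking unless the number of responses is shown alongside.
```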
We highlight here the way in which FFT data come to look different depending on the context in which they are placed, which helps to illustrate the multiple nature of each type of data. For example, in item 1 in Figure 8, the FFT data are arranged in a standardised dashboard. In another example reported below, two patient experience measures, FFT and a local inpatient survey, are placed alongside measures of harm and mortality (Figure 9). This example is an excerpt from trust D’s Integrated Quality Report papers. We draw attention to the fact that, in Figure 9, the different data look comparable, because of the way that they are presented (e.g. colours and graphic style). Differences between harms data and FFT data that might emerge in other settings are here almost elided and the features of this image may even suggest that they share some of the same qualities and can be read and acted on in similar ways.
The way in which the FFT is presented in reports and dashboards reflects and guides discussion of the FFT at various committees and therefore shapes what the FFT actually is in any given context. At trust B, for example, the visualisations of the FFT in the patient experience presentation to the quality committee focused almost entirely on response rates; the recommendation rate was reported only in benchmarking tables against other trusts in the region. At one meeting observed, the director of nursing, who chairs the quality committee, seemed exasperated by a lengthy discussion about how to improve the FFT response rates. She said: ‘Getting the cards in is one dimension. But the other dimension is using the information on the cards . . . What are we doing about using the information?’ (field notes, 2 May 2017, trust B). Interestingly, the director of nursing received evasive responses from senior nurses, doctors and managers, all of whom focused on the way in which the FFT was visualised and communicated to them (e.g. the format of the posters or reports, or the inclusion or exclusion of comments). Whereas the director of nursing was attempting to expand the nature of the FFT beyond its definition as a ‘rate’ (whether response or recommendation) as presented in the papers or visualised in a report, other members of the committee were unwilling or unable to engage with this move, mainly because they disliked the FFT but refrained from saying so in this particular forum.
Wards also adapt the results they receive to their particular internal reporting needs. The quality improvement manager on an intestinal failure unit in trust D described how he takes the FFT data from the Excel spreadsheet he receives and reworks them into a more dynamic chart that tracks FFT data over a long period of time (Figure 10). This is because patients being cared for on this unit tend to stay for several months. Thus, the monthly report communicated to wards from the trust does not provide useful information and the number of responses in any month may be very small if few patients have been discharged.
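One way to read this reworking is as a smoothing operation over monthly counts. A minimal sketch, under the assumption of a simple 3-month rolling window (the unit’s actual chart design may differ), follows:

```python
# A sketch of the reworking described above: smoothing FFT results for a
# long-stay unit with a rolling window so that months with very few
# discharges do not dominate the chart. All figures are invented.
monthly = [  # (month, number of responses, number who would recommend)
    ("2017-01", 2, 1), ("2017-02", 0, 0), ("2017-03", 3, 3), ("2017-04", 1, 1),
]

def rolling_recommend_rate(monthly, window=3):
    series = []
    for i in range(len(monthly)):
        chunk = monthly[max(0, i - window + 1):i + 1]
        n = sum(responses for _, responses, _ in chunk)
        positive = sum(recommends for _, _, recommends in chunk)
        series.append((monthly[i][0], positive / n if n else None, n))
    return series  # (month, smoothed rate, responses in the window)
```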
These visualisations and dashboards are never seen by the staff or volunteers who collect or record the data. What they recognise as ‘FFT’ is very different from the ‘FFT’ presented and discussed at board or other committee meetings in the trust. It is important to appreciate this diversity in what the FFT looks like in different contexts and to different people in order to understand how and why different instantiations of the FFT lead, or fail to lead, to improvements in care.
The Friends and Family Test: noticeboards and media
At trusts A, B, C and E, ‘recommendation rates’ for wards and other units, together with a selection of the open comments, were turned into printed material for display on ward noticeboards (and elsewhere, e.g. public area noticeboards) or were otherwise transcribed by ward managers, who often had responsibility for keeping noticeboards updated (Figure 11). Ward managers approached this task with some reluctance, seeing it as just ‘one more thing to do’. They generally believed that it was a pointless exercise because, in their experience, patients or visitors seldom looked at noticeboards. Some trusts also shared the FFT comments more widely, for example on the organisation’s official Twitter (Twitter, Inc., San Francisco, CA, USA) feed. As a member of the patient experience team at one trust explained, this boosts staff morale and public confidence in the service.
At trust E, the patient experience and engagement officer had responsibility for how FFT was displayed in public areas and in trust reports. She told us she takes the FFT results from NHS England’s site every month (rather than from the trust’s own data management platform) and prepares an infographic. The officer then uses this infographic in several ways: (1) as part of public ‘You Said . . . We Did’ boards at the main entrances to the hospital (Figure 12); (2) in a staff newsletter, which provides a summary of patient feedback (Figure 13); and (3) in the trust’s Integrated Quality Report presented to the board every month (Figure 14).
In some organisations, internal standards existed for what should and should not appear on the noticeboards. One trust (trust A) had specific guidance as to what the ‘Patient experience and safety’ noticeboard should look like and what information it should display. This included a yellow A5 card reporting the percentages of patients recommending and not recommending the trust in the previous month, followed by a breakdown of results across the response categories (e.g. extremely likely, likely). In other trusts, ward staff had greater flexibility over what information to display and how. At trust B, where there was no standard way of displaying information, ward managers could use the FFT data however they wished. Some wards therefore used the FFT free-text comments to populate their ward information boards, for example in the form of speech bubbles or a ‘You Said . . . We Did’ format. Wards might not display the posters provided by the external contractor, or they might act creatively in relation to the posters they received. As we saw in Figures 6 and 7, some ward managers at trust B repurposed the first page of a report as a public poster because the report shows how the unit compares (favourably) with others in the division, information that the official poster lacks. This ‘flexibility’ was born principally of a lack of resources: the PALS team responsible for administering the FFT did not have time to check whether posters provided by the external contractor were displayed, or to design and enforce a standardised format across the trust.

At trust D, which collected the FFT through text messages, both the public-facing ‘ward boards’ (Figure 15) and the staff-facing ‘quality and safety information boards’ were standardised, and their regular updating was assessed as part of the ward accreditation scheme. However, the ward boards displayed not the FFT results but the results of the trust’s own inpatient experience survey. The ward manager accesses the online system monthly (the frequency of access is monitored by the staff who administer the ward accreditation scheme), prints out the results (including comments) and displays them in the ‘Patient Feedback’ section of the ward board. The FFT therefore did not feature in public areas of the ward. Ward managers would sometimes use comments from the FFT on their staff-facing information boards; one ward manager on a care of the elderly ward explained that she put the FFT comments up to motivate staff and improve morale rather than to instigate any other action.
The variations in the mode of data generation and processing and in the types and numbers of data transformations described above are not exhaustive, but they provide sufficient detail to illustrate the multiplicity of patient experience data. By this, we mean that a particular ‘version’ (e.g. a paper questionnaire, a dashboard, a report, a photocopied comment) may be what counts as patient experience data for a particular actor in a certain context. In other words, we refer to the FFT as one form of data, but this ‘form’ is actually made up of a range of different ‘versions’. Some of these ‘versions’, or entities, may be the main, or even the only, form in which patient experience data exist for particular actors. So, for example, the board of directors will interact with the FFT mainly, if not exclusively, as a row in a table indicating response and recommendation rates, whereas a ward sister will interact with the FFT mainly as a pack of completed feedback cards to scan through for comments. Rather than treating the FFT as a singular, defined object in the world of hospital trusts, our ANT-informed approach allowed us to explore this multiplicity and how the different ‘versions’ of a form of data are involved in, and emerge through, different interactions. Although these are all named ‘FFT’, they are not the same thing. Knowing how the FFT transforms and exists in multiple forms helps us to understand more precisely how improvements in care may result. We return to this in Chapter 6, after illustrating some examples of the co-ordination work that contributes to keeping data transforming and moving across the organisation.
Degrees of regularity in the collection, analysis and use of patient experience data: established and emerging patterns
Having looked at the multiple nature of patient experience data at both the point of collection and the point of management, we now illustrate how certain forms of data (such as the NCPES, the FFT, patient experience walkarounds and patient complaints) showed some degree of consistency in the co-ordination of the processes of generation, analysis and use of the information they provided (both across trusts and within each trust), whereas others, such as the dementia carer survey and patient stories, still appeared to be emerging as consistent practices. This variation in consistency is important in explaining why some types of data, although still constituting identifiable data formats with a name and a potential function (such as dementia carer surveys and patient stories), were much harder to observe and examine closely: the format of the data might be undergoing revision, their use might be in its infancy or their very existence as ‘data’ might be debated.
In Chapter 3, we explained our rationale for choosing to focus in particular, at each trust, on the two clinical areas of cancer and dementia care. Here we explore how, in relation to the complex ontology of patient experience data, these two areas appeared to represent two ends of a spectrum of the regularity of practices. At one extreme, in the context of cancer care, we had the fairly established co-ordination of the NCPES; at the other, in the context of the care for people living with dementia, we had difficulties in identifying and following what could be counted as patient experience data. We think that these examples are illustrative of a different kind of variation in the extent to which organisations and the departments/divisions within them have established practices of patient experience data work. In the following sections, we illustrate the example of the NCPES in cancer care and that of patient/carer experience data in dementia care, and the features of less widely known data formats, such as ‘executive walkarounds’ and patient stories, to give the reader a sense of this variation grounded in rich descriptions of observed practices.
Established data practices: the National Cancer Patient Experience Survey
The NCPES is an annual survey that began in 2010. It is commissioned and managed by NHS England, and designed, implemented and analysed by Quality Health, a CQC-approved national private contractor. In October each year, Quality Health contacts all hospital trusts in England asking them for details of adult NHS patients with a confirmed primary diagnosis of cancer who were discharged from an NHS trust after an inpatient episode or day-case attendance for cancer-related treatment in April, May and June of that same year. In most trusts, the cancer services administrative team and the lead cancer nurse liaise with Quality Health to create this list. Staff remove duplicate entries from the list (e.g. if the same patient was discharged more than once during this period) and those patients who have died; patients are also coded by tumour group. These sublists of patients by tumour group are sent to the tumour-appropriate CNS, who determines whether or not the patient might be harmed by being contacted to complete the survey (e.g. if it would cause the patient distress or if the patient was receiving end-of-life care). As the CNSs are key workers for most of the patients, they are regarded by cancer managers and lead cancer nurses as competent to make this determination. The lead cancer nurse at trust E acknowledged a small risk that CNSs might exclude patients whom they thought likely to give negative responses to the survey; however, while noting that it is not possible to predict how a patient will complete the survey, she felt that it would be obvious if nurses were deliberately excluding large numbers of patients.
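The pre-survey list preparation described above is, in essence, a small data-cleaning pipeline. The following sketch is purely illustrative; the record fields are hypothetical:

```python
# Illustrative sketch of the pre-survey list preparation described above:
# de-duplicating discharges, removing deceased patients and grouping records
# by tumour group before CNS review. The record fields are invented.
def prepare_ncpes_sublists(discharges, deceased_ids):
    seen, by_tumour_group = set(), {}
    for record in discharges:  # dicts with 'patient_id' and 'tumour_group'
        patient_id = record["patient_id"]
        if patient_id in seen or patient_id in deceased_ids:
            continue  # drop repeat discharges and patients who have died
        seen.add(patient_id)
        by_tumour_group.setdefault(record["tumour_group"], []).append(record)
    return by_tumour_group  # each sublist goes to the tumour-appropriate CNS
```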
The list of eligible patients is sent to Quality Health, which compiles a sample of patients to contact. The 2016 survey consisted of 59 reportable questions covering the whole patient pathway and had free-text boxes for patients to leave comments. 57 Questionnaires were posted to the sample of patients, who were also given the option of responding online. We note this here by way of background; we did not look at the work of Quality Health as part of our fieldwork, which was entirely hospital-based.
The results of the survey are shared with trusts the following year, approximately 12–14 months after the surveyed patients were discharged as inpatients. From our observations of practice and conversations with key staff members at cancer services at all of our study sites, it is the lead cancer nurse who has responsibility for the pre-survey work outlined above and the post-survey work of reporting the data within the trust. From the point of view of the lead cancer nurse, the report consists of two elements: ‘results’ or ‘scores’, and ‘comments’. The results are published on the NCPES website and are publicly available; the comments are provided privately to each trust in a password-protected document. The results section details the trust’s score for each question, together with data comparing the results with those of the previous year and with the range of results nationally. Towards the end of the document, the results for each question are broken down by tumour group (12 named categories and ‘other’) with scores for ‘this trust’ and ‘nationally’ placed side by side in tables. Those tumour groups reporting fewer than 20 respondents are not given scores and are instead placed in the ‘other’ category. Thus, the ‘results’ section of the NCPES report contains a prominent element of benchmarking against national results, both in general terms and with regard to specific tumour groups. The ‘comments’ section reports all of the free-text comments provided by respondents and is broken down by tumour group.
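The rule for small tumour groups can be expressed compactly; the sketch below is illustrative only:

```python
# A sketch of the reporting rule described above: tumour groups with fewer
# than 20 respondents are not scored separately but pooled into 'other'.
def apply_small_group_rule(counts_by_group, threshold=20):
    reported = [g for g, n in counts_by_group.items() if n >= threshold]
    pooled = [g for g, n in counts_by_group.items() if n < threshold]
    return reported, pooled  # pooled groups appear only under 'other'

reported, pooled = apply_small_group_rule({"breast": 85, "sarcoma": 7})
# -> (['breast'], ['sarcoma'])
```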
The ‘official’ report by Quality Health is not necessarily seen by most staff. At some of the study sites, the lead cancer nurse, in collaboration with cancer services, produced a local version of the report. For example, at trust B, following the publication of the 2016 NCPES results,57 a cancer services administrator created a specific report for each tumour group (Figure 16).
This report contained a table of several elements: the tumour group results and the ‘all cancer’ results for 2015 and 2016, both nationally and for this trust specifically. The administrator added green upwards arrows and red downwards arrows in the column listing the trust’s 2016 results for that particular tumour group, showing ‘at a glance’ whether or not there had been an improvement compared with the previous year. The patient comments, organised by tumour group, were added to this document. At another trust, the lead cancer nurse summarised the process of making sense of the survey results as follows:
My main role is to turn the findings from the cancer patient experience survey, the national cancer patient experience survey, turn that into a work plan across the trust, which is then put into action. And we have had some notable successes over the last few years with that and where we’ve turned, where we’ve picked up issues that have come from that and turned it into action.
Interview 007, trust A
Practices of this initial ‘organising’ work differed slightly from trust to trust and from year to year, depending largely on the approach taken by the lead cancer nurse and what the data showed. At trust E, for example, the lead cancer nurse usually organises the comments by theme and by tumour group. However, in 2016 she did not do this because, in her view, ‘there isn’t one thing that’s more prominent than others’. Thus, the data themselves determined the form of the document she circulated. In the next section, we discuss the interactions in which the multiple entities constituting a type of data became involved at the study sites, and the links these interactions had with care improvements. For now, our aim is to offer a rich description to highlight how the multiple nature of each form of data cannot be overlooked if we aim to understand, in depth, how improvement is enabled.
At trust E, the NCPES was also presented to the board. This did not take the form of either the official NCPES report or the locally produced ‘rearranged’ reports. Here, the lead cancer nurse wrote a paper detailing the importance of the survey and its key messages for the trust, providing explanations for good or poor results and presenting the initiatives under way in cancer care to improve quality. The paper did not discuss or present all of the results from the survey. Instead, it contained a visualisation taken from the regional Quality Observatory cancer patient experience dashboard showing how this trust compared with others locally on three measures: how many of its results were (1) significantly higher than, (2) not significantly different from or (3) significantly lower than the national average (Figure 17).
This visualisation was placed in the context of a description by the lead cancer nurse explicitly referencing the publicly accessible online NHS Cancer Dashboard, which brings together a range of cancer-related quality metrics and allows comparison at Clinical Commissioning Group (CCG) and provider level. The Cancer Dashboard uses seven questions from the NCPES to populate its ‘Patient Experience’ section, and Quality Health structures the executive summary of each trust’s NCPES report entirely around the questions displayed on the Cancer Dashboard. 58 The paper prepared by the lead cancer nurse made its way into the board papers as the main findings of the survey. In addition, the lead nurse highlighted the two areas where the trust’s performance was below the expected range. However, the Cancer Dashboard emphasises the identification of, and contact with, CNSs as a way of measuring good patient experience (two of the seven indicators refer to CNSs), giving prominence to this dimension of experience in a way that the full report of survey results does not (only three of the 59 questions reported in Quality Health’s results document for trusts refer to CNSs). This relates to our earlier point on the multiplicity of each type of data and is significant because it shows how a particular ‘version’ of the NCPES, in its various transformations, may influence which roles are regarded as key to improving patient experience. In our example, we see how, in the context of this trust board, the NCPES is closely allied to the Cancer Dashboard and the work of the regional Quality Observatory, with an emphasis on CNSs that might shape the way in which the results are used (CNSs are heavily involved in both pre-survey and post-survey activity, as we discuss further in Chapter 6).
In our detailed discussion of the NCPES, we have described a type of data, a survey in this case, that shows some degree of regularity in the practices that revolve around its collection, analysis and use to implement change. We have also shown how, despite this relative regularity, particular ways of presenting the data may give more prominence to some specific dimensions of experience over others. In the next section, we use examples from dementia care to describe a case of significant lack of regularity in patient experience data practices.
Less established data practices: patient and carer experience in dementia care
Unlike in cancer care, there is no official national patient experience survey of patients living with dementia, and no common pattern of CNSs or clinical leads of services conducting locally designed patient experience surveys (as in many cancer specialties). There is also less uniformity in the kind of data that are collected in dementia care and less formality in the way in which they are reported. In the following pages, we present our findings from the fieldwork we carried out in this area at the five trusts in order to illustrate what less-established data practices might look like. We described in the previous section how the collection, analysis and use of the NCPES data are enacted with a degree of regularity, a relatively established choreography (e.g. CNS action plans, reports to trust board) and a fair degree of similarity between the trusts. This co-ordination around and through the data is less apparent in the context of care for people living with dementia. Here, the clinical effects of cognitive impairment make documenting and interpreting experiences particularly problematic. On the most basic level, long questionnaires are simply not appropriate in many cases; complex-care needs involving long hospital stays may confound the feedback picture; and the importance of hearing from patients as well as from their carers requires additional resources. Even simply identifying patients with a diagnosis of dementia across the hospital may be a struggle in some cases (owing to inadequacies in the electronic patient record and its coding system, and sometimes to frequent care transfers), and it remains unclear which types of data would be most meaningful in the context of significant cognitive deterioration.
Formal systems to account for the specific experiences of patients with dementia are currently limited to the National Audit of Dementia (NAD), which measures the performance of English and Welsh hospitals against criteria relating to aspects of care delivery for patients with dementia. Although it does not collect experience data from patients with dementia themselves, it asks participating hospitals to carry out a survey of carer experiences of quality and care. The survey was a new element of the NAD, introduced in its third round of operation in 2016; it was retained for the fourth round of the audit that took place in 2018–19. Although the NAD is not mandatory, 98% of hospitals eligible to participate in 2016 submitted data for all or part of the audit [see executive summary at www.rcpsych.ac.uk/docs/default-source/improving-care/ccqi/national-clinical-audits/national-audit-of-dementia/round-3/nad-round-3-executive-summary.pdf (accessed July 2019)]. Beyond the NAD, most trusts did not seek feedback from patients living with dementia as an identified and differentiated group of people. Nevertheless, patients with dementia (or their carers) completed the FFT and participated in other national and trust-wide non-specialty-specific activities to gather patient experience data (e.g. the CQC’s National Adult Inpatient Survey). One of these trust-wide initiatives is a carers’ experience survey, which at four of the five study sites (A, B, C and E) has been used to target carers looking after patients living with dementia. The carers’ experience surveys are locally designed and show a great deal of variation in the kinds of data they provide and in how these data are generated and transformed. Nevertheless, in the context of care for people living with dementia, they were the only type of data that showed some degree of consistency in their collection, analysis and use. We discuss these surveys here to show how much more fragile the co-ordination of the processes around these data appeared than that described for the NCPES. Because there is no standardised way in which data from carer experience surveys are generated and transformed, we provide contrasting examples from our study sites that highlight important facets of this sort of data.
The carer experience survey at trust B
At trust B, the carer experience survey had two principal versions that began running in mid-2015: an online questionnaire and a telephone survey. This illustrates how this survey, like the NCPES and the FFT, also transforms, becoming different objects at different stages. The dementia lead nurse at this trust reported that, although John’s Campaign (an initiative in which trusts pledge to support the right of carers to stay with people living with dementia in hospital and the right of people living with dementia to be supported by their family carers while in hospital, https://johnscampaign.org.uk/; accessed 18 September 2019) was happening at the same time, their carers’ questionnaire arose as a result of a Carers Week event that had been held at the trust. At that event, it had become apparent that carers had no effective way of feeding back to the trust about how it might improve services for carers and for those for whom they care. The trust also participated in the dementia/delirium Commissioning for Quality and Innovation (CQUIN) framework (which supports improvements in the quality of services and the creation of new, improved patterns of care) scheme for 2015–16, an element of which specifically referred to harnessing the views of carers. Although the carer experience questionnaire was originally designed by the trust’s lead dementia nurse and the trust’s web editor to be completed solely online, the nurse and her colleagues at the time realised that the online format would not necessarily suit their particular cohort of patients and carers. Therefore, they also launched a telephone survey as part of the dementia CQUIN. For this version of the survey, a volunteer telephones a sample of carers of people living with dementia who have recently been discharged from an inpatient stay. The telephone questionnaire is almost identical to the online version, with small changes to the wording of questions to account for an oral delivery rather than a written communication. The questionnaire asks 19 questions (of which five are demographic questions) that require carers to choose from a range of responses and offers the option of leaving free-text comments after every question. The identification of potential respondents and the recording of respondents’ answers are, in themselves, complex processes. We look at them here in some detail to demonstrate the work that goes into creating patient or carer experience data and how, in some contexts, trust staff generate their own understandings of what data are and how to collect them.
The volunteer receives a specially requested ‘download’ of patient data from the trust’s data warehouse. These data relate to patients who have been coded with a primary or secondary diagnosis of dementia and who were inpatients and have been discharged. Although the volunteer receives a ‘download’ every week, data are collated at monthly intervals for use in the following month. The volunteer uses a spreadsheet to organise these patient data, to which he applies the following exclusion criteria: patients with an inpatient stay of < 3 days and patients who have been discharged twice in the same month. Patients who are reported deceased and patients who might be duplicate entries are also excluded. These criteria have codes that the volunteer applies to each patient entry (e.g. patients who have died are coded with a number 5 and then coloured red). In addition, those patients whose carers were contacted to complete a questionnaire during the NAD are excluded. More generally, the NAD plays a structuring role in this sampling phase; the volunteer applies the NAD’s exclusion criteria to the telephone survey. The ‘download’, however, lacks an essential piece of information that the volunteer needs to conduct the telephone survey: the contact details of the patient’s next of kin, who is assumed to be the patient’s carer. This is obtained from the trust’s information system by matching patient numbers. The volunteer uses the trust’s system to check whether or not the patient is still alive the day before he makes the survey call. The volunteer aims to make 20 telephone calls every month, although, as a result of other work on improving patient experience, this has recently averaged five calls per month. He explained that he uses the codes and records the exclusions to show the lead dementia nurse (if she ever asked) why, if 150 patients with dementia are discharged every month, he only contacts 20.
When the volunteer is on a call, he reads out specially designed telephone survey questions and records the responses online. He initially used the online version that carers themselves saw, but realised that the questions needed rephrasing to take account of the fact that they were being spoken over the telephone rather than read; carers found it difficult to understand what he was asking them. The volunteer has had to refine how he approaches the call. He reminds us that it is a cold-call: ‘[the carer] hasn’t asked to receive it. It’s different to them providing feedback online, which they’ve chosen to do.’ The telephone survey also offers the chance to probe and to offer signposting: ‘Sometimes you’re discussing a situation with a carer on the phone and you try and find out if they’ve already taken it up with PALS. If not, you offer them PALS contact information. This isn’t something that is put in the same way on the online form’. As he talks to the carer, he jots down notes on a pad and after the call is finished, he writes up a ‘summarised’ account of what the carer said; he says, ‘sometimes they talk for ages and talk about a lot of things. You can’t write all of that down – it’s too much. So I summarise it’ (fieldnotes, 1 March 2017, trust B).
Once the telephone survey has been completed online and submitted by the volunteer, the response goes to the trust’s web editor, who sits with the communications team. The web editor is responsible for the content of the trust’s website but also carries out ad hoc work on a few online forms, including the carers’ questionnaire; he worked with the volunteer to redesign the online survey to make it more suitable for delivery over the telephone. The responses reach the web editor by e-mail in an encrypted format, which he must then decrypt. He regards this as unnecessary additional work and, as a result, rarely looks at these e-mails. Every month, he exports the information from the e-mails to an Excel spreadsheet and, without examining the detail of the data, sends the spreadsheet to the lead dementia nurse.
These data are reviewed every 6 months by the carers’ lead (a matron), the clinical lead for dementia (who is a consultant geriatrician) and the lead dementia nurse. The carers’ lead writes a report annually (Figure 18), identifying key themes and possible areas for improvement. This is presented at the Dementia Strategy Group and the Carers’ Committee.
As noted above, during the latter part of our fieldwork, the volunteer who carried out the telephone survey was finding it increasingly difficult to find the time to conduct it. As a result, and over a period of several months, the number of survey responses had declined; there had also been a decline in the number of responses received directly online. Between January and September 2017, the trust had received 24 responses to the online questionnaire and the volunteer had completed 14 telephone questionnaires. At the Carers’ Committee meeting in September 2017, which we observed, the carers’ lead reported these figures and explained that she and the clinical lead for dementia had decided to make the carers’ questionnaire shorter; they regarded 14 questions as too many. The carers’ lead decided ‘to draw a line under this year’s questionnaire’ because of the small number of responses. She informed the committee that she would not write a report that year and that they would begin afresh once the new questionnaire had been rolled out.
This example demonstrates the more informal, contingent processes involved in collecting and acting on dementia carer experience data. The volunteer’s work in structuring the survey, and how and to whom it is administered, is subject to change given other demands on his time or changing strategies within the dementia and carer staff teams. The limited capacity of these data to act is demonstrated by the relative ease with which a whole year’s data could go unreported.
The carer survey at trust A
Another example that illustrates the relative fragility of the carer survey as a type of patient experience data is that of trust A. Here, the questionnaire comprised six questions:
- details of hospital admission (ward, month) and ethnicity of the patient
- receipt of a relevant information booklet during admission (yes/no/do not know or cannot remember, with box for comments)
- whether or not the staff were skilled in understanding the needs of the person with dementia (yes all staff/yes some staff/not many staff/not at all, with box for comments)
- involvement in decisions about the care and treatment of the person with dementia (always/sometimes/no/do not know, with box for comments)
- involvement in plans for discharge (yes always/yes sometimes/no/do not know/not applicable, with box for comments)
- interest in being contacted by a relevant dementia-focused charity for support (yes/no, with line for postcode and telephone number, and box for additional comments).
At this trust, the questionnaire was administered either by volunteers or, when volunteers were not available, by a member of the patient experience team. The collection and processing of these data presented a number of challenges. First, owing to frequent errors in coding in the electronic patient record, it was impossible to determine exactly to which wards patients with a diagnosis of dementia had been admitted. Second, it was possible that patients with a recent diagnosis of dementia had not yet been told about their diagnosis, making it more difficult for carers to relate to the questions in the survey. To collect these data, the person administering the questionnaire therefore often opted to tour a selection of wards to try to identify, with the help of the ward sister/manager, potential candidates for completion of the questionnaire. The questionnaire was administered either in person (following identification of patients and carers on the wards, as described) or over the telephone following discharge from hospital. At the time of our fieldwork, the dementia carer survey had been carried out for three 1-year cycles and was undergoing review. The survey data were collated and summarised in a report by a member of the patient experience team; the report was then circulated to the members of the Dementia Steering Group and the divisional director of nursing responsible for patient experience. The 2016/17 report included some information on the survey background, the number of responses (just under 90), pie charts of answers to each question, key themes from the analysis of comments, and a sample of comments under each key theme. Nevertheless, the survey itself was subject to refinement and contestation. For example, at one Dementia Steering Group meeting that we observed, the wording of the questions was challenged on the basis that references to ‘dementia care’ were unhelpful because ‘dementia care’ did not comprise a set of practices recognised by carers themselves; a review led to the rewording of all questions that mentioned this phrase.
The one-time-only carer survey at trust E
Finally, a survey of carers of patients with dementia was carried out in the spring of 2017, during the fieldwork phase at trust E. Unlike the two described above, this was not an ongoing survey. The equality and diversity lead at the trust wanted to find out about the experience of carers and young carers of inpatients and about how ward staff supported carers. The trust’s dementia CNSs, who sat in the adjoining room, had learned about these survey plans through the comings and goings between offices. The dementia nurses asked the equality and diversity lead if they could ‘piggyback’ onto her general carers’ survey with a supplementary and specific dementia carers’ survey, as well as a staff evaluation of how they supported carers of people with dementia. She agreed and they worked together over a period of several weeks to distribute the questionnaires to staff and carers on wards at the trust’s two main hospital sites. They collected the responses and then asked the patient experience data analyst, who works with the equality and diversity lead, to collate the data and produce charts to visualise the results. The dementia CNS who was leading this work was dissatisfied with the way in which the data analyst had originally visualised the report, saying that the bar charts he had initially used were not clear; he replaced them with pie charts, which she preferred. For the dementia CNS, the survey findings showed that, whereas staff believed that they had a lot of knowledge about dementia carers’ needs and provided them with good care, the carers felt that their needs were not being met adequately. The CNS presented the report to the trust’s Dementia Steering Group in July 2017. In a conversation with us the following day, she said that there had not been any discussion after her presentation at the meeting. For her at least (and perhaps for others attending the Steering Group), these findings had not been a surprise; they knew from training events and other activities that there was a divergence between staff’s and carers’ evaluations of care. At the time of our research, there was no plan to repeat the survey in the foreseeable future.
As we stated earlier, the carer survey (in its variants) is one of the relatively organised forms of data at four of our five study trusts (A, B, C and E). However, the descriptions above already exemplify how much less consistent the process of generating, processing and reporting these data was than that for the NCPES in cancer care. In Chapter 6 we will discuss the lack of clarity about how this survey triggered improvements in care. Here we highlight that the carer survey showed more marked variation across trusts and a degree of fragility within them. The working practices surrounding the survey were less organised than those for the NCPES. There was more ‘trial and error’ in the initiatives used to collect data and more willingness to ‘start over’ or redesign if the survey process showed limitations. This is partly a consequence of the non-standard nature of the survey and of the control that members of staff have over the survey itself. However, even when the survey was recognised as successful, nurses felt that they could extract little information from the data that they did not already know. Significantly, the carers’ experience surveys often existed in the shadow of a generalised failure to devote resources to adequately eliciting the experiences of patients living with dementia themselves.
Other data
The cancer and dementia examples above illustrate two extremes of a spectrum in terms of the regularities in their associated practices. Somewhere along this spectrum we observed a number of other types of data that contributed to the patient experience landscape at each trust. Below, we outline the main features of bespoke local surveys (which we encountered at all trusts), executive walkarounds, formal and informal complaints and compliments, and patient stories (at trust A) to provide a richer picture of the variation in data formats and practices across the trusts we visited.
Bespoke trust-specific surveys modelled on the CQC’s National Adult Inpatient Survey were carried out at all trusts. Local inpatient surveys essentially provided feedback in the same domains as the national inpatient survey, but with a quicker turnaround and with the inclusion of information specific to a particular trust. As one patient experience team member at trust A explained:
. . . we do an internal inpatient survey of 20 questions that volunteers go out and actually meet inpatients at the bedside while they’re in the middle of their stay and that also gives us a good idea about specific areas. [. . .] . . . you’ve got 20 questions or 20 views from one patient, and as you’re collecting that live [. . .] throughout the month, then you can report that straight back to again, the wards or the board, and they have a good idea about what’s going on on the wards, directly from the patients.
Interview 109, trust A
These surveys presented a certain degree of regularity, in that they were an ongoing process generating information that was collated and summarised/reported at regular intervals (often monthly). However, the target number of completed questionnaires each month was usually flexible and the collection itself was usually allocated to volunteers, with all the variability that this entailed: the number of volunteers available for a particular task may change depending on the time of year (for example, when volunteers are students with academic commitments) or on changes in volunteers’ personal circumstances.
All trusts in our study also carried out some form of walkaround. At trust A, to use one example, patient experience walkarounds (also called executive walkarounds or walkabouts) were a relatively well-established form of patient experience data and distinct from patient safety walkarounds. Patient experience walkarounds were carried out, on a rotating basis, by a team that included members of staff (an executive director, a non-executive director, a member of staff with managerial responsibilities), a public governor and a patient representative. The walkarounds showed a fair degree of consistency: they were carried out fortnightly; a different ward/service was visited each time; a rota detailed the teams carrying out each walkaround; a standard walkaround form was completed during the visit; patients on the ward visited were asked a few questions about their care; a brief meeting was held at the end of the visit to identify key actions and add these to the form; and the form was then returned to the patient experience team for reporting (Figure 19).
At all trusts, formal and informal complaints also showed a significant degree of regularity and consistency in the ways in which staff followed them up and addressed the issues they raised. All trusts in our study had highly codified procedures for dealing with complaints and had systems in place to ensure that set timeframes were adhered to. However, at two of the five trusts we studied (trusts A and D), complaints were handled by a team that had little to do with other aspects of patient experience data work or with the patient experience team, where this existed. Patient experience teams also dealt with compliments and passed these on to relevant staff. However, compliments showed significantly less regularity than complaints. Usually, thank-you cards from patients were delivered directly to ward staff and displayed on a noticeboard for all ward staff to see. At one trust, a dedicated e-mail address allowed people (patients as well as staff) to share positive feedback with the wider trust. Other trusts attempted to keep central records of compliments received by encouraging wards and units to report cards, letters and presents; these then featured on directorate dashboards produced by the patient experience team.
Some of the data formats we observed have very little regularity and/or are still in the process of being developed; however, this does not necessarily prevent them from generating a positive impact on the quality of care for patients. A good example of this is the patient stories format at trust A. At the time of our fieldwork, the patient experience team was about to complete their third patient story. The first patient story was developed in 2015 around the experience of one particular carer (whom we will call Louise) who felt strongly about improving the experience of care for other patients as she could not do it for her husband, who had been an inpatient at the trust’s hospital. In Louise’s words:
. . . you know, the one person I wanted to change things for wasn’t going to be there any more, but I wanted to change things for other people . . . [. . .] They had a project in mind and that was to write a patient story and asked me would I be interested. [. . .] So I wrote, it was very cathartic anyway, I wrote it down and I showed [Patient Experience team member] and I thought they were just going to take excerpts from it and so I’d just written it how I felt, you know, but they wrote it just as I actually presented it. And even recommendations and, they were my recommendations. So it was just all my story . . .
Interview 008, trust A
This patient story was produced as a booklet and Louise was invited to present it in person at the board meeting. The story was then used to inform a number of training sessions for staff, led by the practice development team. The patient story initiative was approved by the director of nursing responsible for patient experience. At trust A, the patient experience team is responsible for identifying potential candidates for patient stories and also for deciding where the story in its final format (booklet or video) should be presented. Following her involvement with the patient story initiative, Louise had also become a patient representative within the trust. At the time of our fieldwork, Louise had discussed with the patient experience team possible ways to monitor whether or not care on the wards had improved on the specific issues highlighted by her story. The patient experience team responded by designing an audit that would be carried out by Louise herself, with the team’s support where requested or useful, and she would decide when she was satisfied that the main issues she had experienced in the care for her husband had been addressed to a sufficient extent. At the time of completing the fieldwork, the audit, consisting of a short questionnaire to administer to inpatients, was being carried out by Louise with the support of one of the patient experience team members. Our notes from a conversation between a researcher from our team and the patient experience team member co-ordinating the development of patient stories (Sarah, also a pseudonym) report that staff at this trust considered action on patient stories to be the responsibility of the patient experience team but also to have a place in the context of the trust’s experience strategy:
. . . there is no point sharing a story with the Board if the actions for improvement are not carried out. And these actions do not sit with the Board but with Patient Experience team and the practice development team. Louise’s story is now due to be presented to Board again with evidence of changes and data from the audit. Patient stories are considered part of strategies and values of the Trust.
Field notes, 9 June 2017, trust A
Patient stories at this trust were a good illustration of a type of patient experience data that presented less regularity (the team did not know when a suitable candidate for a story would be identified, the format might vary depending on whether or not the patient was willing to be filmed, and the content of the story might vary depending on who was responsible for the editing; Louise’s story was not edited, but filmed stories were) but at the same time reached a variety of audiences, from staff in training to executive directors, largely unchanged.
So far we have described the variation in patient experience data types and processes at the trusts that took part in our study. We have deliberately started by foregrounding the non-human actors and their transformations and we will say more about what these actors do and what other actors they interact with in the next section. However, the picture we provide here would not be complete without a presentation of the human resources that the trusts deployed directly into their patient experience data work.
The work of patient experience teams
In approaching our fieldwork, one of our early concerns was to understand how trusts operated with regard to responsibility for their patient experience work, and for data work in particular. We sought to identify, in the first instance, who within the trust had organisational accountability for this work and how particular teams and patient experience responsibilities were aligned at each trust. To do this, we relied on organisational charts and our local contacts. We discussed formal structures and responsibilities for patient experience data with our informants at each trust and examined strategy documents and minutes from committee/steering group meetings to obtain a fuller picture of how patient experience practices worked. All five trusts operated differently when it came to the practical, centralised organisation of patient experience work, especially in matters of data collection and management. Three of the study trusts (trusts A, C and E) had formally designated patient experience teams. These teams were involved in collecting, collating and monitoring the FFT (as described above) as well as conducting other forms of patient experience work. This included designing and administering trust-wide inpatient surveys, collecting patient stories, reporting on the results of the National Patient Experience Survey Programme and co-ordinating action plans in response to these, as well as taking part in trust service steering groups such as nutrition, estates, equality and diversity and patient information review. Some were also involved in bereavement services, in arranging interpreters and in assistance for patients with disabilities. Members of patient experience teams had a range of backgrounds and non-clinical skills; many had worked in various clerical and administrative roles in trusts. In one trust, the patient relations manager was a former nurse. Some teams were managed by a non-clinical head of patient experience, who reported to deputy directors of nursing (trust C); one team (at trust A) was managed by a divisional director of nursing. At all trusts, executive responsibility for patient experience lay with the executive director of nursing.
At the three trusts with designated patient experience teams (trusts A, C and E), a key forum for the discussion of patient experience data and issues was a Patient Experience Committee or Steering Group. Convened at regular intervals (monthly to quarterly), these meetings allowed for the discussion of matters arising in relation to patient experience data, of patient relations/PALS activities and of updates from executive directors. In addition to the activities listed above, trusts C and E had subgroups within the broader team that handled complaints management and/or investigations, either within PALS or independently of it. At trusts B and D, which did not have formally named or designated patient experience teams/heads of patient experience, patient experience data and related issues were discussed at general ‘Quality Committees’, which looked at all three aspects of quality (patient experience, patient safety and clinical outcomes/effectiveness).
Trust B had a PALS team that carried out many of the functions that were elsewhere undertaken by patient experience teams. At this trust, the head of PALS described herself also as the ‘head of patient experience’ but lacked the official job title. Indeed, the person who had occupied the role previously had held the title of ‘head of PALS and patient experience’ before trust management had decided that, in order to lend the work weight, ‘patient experience’ ought to be the responsibility of a senior nurse rather than a non-clinical administrator. Since this decision, changes in trust personnel and roles had meant that ‘patient experience’ was now once again more recognisably part of the area of responsibility of the head of PALS.
We note this here to show how titles, roles and names of teams may obscure the actual work that goes on in the organisation: in many respects, the work that the PALS team did at this trust was indistinguishable from that of specifically designated ‘patient experience teams’ at other trusts. Nevertheless, such naming might also signal something of how organisations see and value certain types of work; certainly, the head of PALS and her team at trust B found it challenging to manage complaints as well as (from their point of view) conduct effective patient experience work.
There are two areas in which this trust, where the PALS team also carried out patient experience data work, might be fruitfully compared with trusts A, C and E, each of which had a designated patient experience team. One is its lack of a dedicated ‘patient experience officer’ role; this officer would visit the wards and clinical service areas and be able to connect various aspects of the operational patient experience work (those listed above). The second is the area of skilled data management and analysis, for which trusts A and E, which possessed patient experience teams, employed a dedicated person. With regard to the former difference (i.e. the lack of a dedicated patient experience officer), it is worth reporting that at trust E the patient experience and engagement officer was heavily involved in FFT-related patient experience data work (collection, reporting), monitored and reported on the trust’s own inpatient survey, chaired the patient information review panel, attended the Patient Experience Steering Group and facilitated a standing patient advisory panel. This meant that the head of patient experience could take on a more strategic role, which was not a possibility for the head of PALS at trust B. As for the second difference, the availability of a dedicated ‘data person’, this took the form of an information analyst within the patient experience team at trust E and a data entry analyst at trust A, whose principal task was to input information from the FFT questionnaires into the data management software, as described in Friends and Family Test transformations. The information analyst at trust E pulled the FFT data from central systems and conducted content analysis on the comments. He designed and populated dashboards showing different forms of patient experience data and also provided support to members of his team and other colleagues throughout the trust. We also note that trust E’s information services manager (who did not sit within the patient experience team) also played a role in analysing and communicating the FFT scores and comments and liaised with some members of the patient experience team. In this way, the patient experience team could draw on people in dedicated roles who analysed and organised data in creative ways; this resource was not present in the other trusts’ teams.
Trust D, the other trust without a formally constituted patient experience team (see Table 3), saw an association of professional figures at corporate trust level [the lead nurse for corporate services, a corporate matron, members of the QI team and the assistant director of service user experience (the last two being non-clinical)] manage the patient experience data work. These members of staff worked together formally and informally to embed patient experience work across the trust through various mechanisms, including patient experience learning sessions (with an associated Steering Group) and a long-standing ward accreditation scheme (which we discuss further in Chapter 6). Complaints at this trust were handled by a separate team. This lack of a patient experience/‘PALS as patient experience’ team was an organisational choice, as the director of nursing mentioned at trust D’s JIF in February 2018. Front-line staff were encouraged to collect, analyse and act on patient experience data themselves in contexts facilitated by the professional figures listed above. In contrast to what we observed at the other trusts, this group of people comprised both senior nurses and other staff members versed in the theory and application of QI techniques [e.g. plan, do, study, act (PDSA), Lean, Microsystems].
The information on patient experience teams provided here is not intended to be exhaustive. Our aim was to draw out salient contrasting features that exemplify the range of undertakings, roles, skills and involvement in other trust activities of staff engaged in patient experience data work. This might further our understanding of how the diverse teams we described can provide (or fail to provide) a link between patient experience data and improvements in care for patients.
In Chapter 6, we examine how the aspects of patient experience data and data work we have illustrated here (the multiple forms data can take, the more or less co-ordinated character of their transformations and the local configuration of patient experience data work responsibilities) play out in leading to action for improvements in care.
Chapter 6 Findings 2: what data do and how they do it
In this section we explore the links and associations that turn patient experience data into action for QI. We structure this section into three subsections. In the first, we look at two different ways in which data act. First, having used the example of the FFT in Chapter 5 to show how each type of data is multiple, we return to the FFT to illustrate how data can lead to action by virtue of their multiple natures (i.e. how the different forms that the FFT takes can lead to improvements via different paths). In the second example, we turn to the NCPES and show how its widely recognised limitations provide an impetus for staff in cancer services to devise strategies to obtain the information needed to improve care.
In the second subsection, we provide four detailed examples of the ways in which links between patient experience data and quality improvement are created and sustained. Through these examples, we show how these links are made possible through the emergence of three qualities (i.e. autonomy, authority and contextualisation) that characterise the relationship between data and human and non-human actors. In these examples, we consider non-human entities, such as other data, accreditation systems and external organisations, as having a role equal to that of people in these processes.
In the third subsection, we bring together the points from the preceding subsections through an extended example of the handling of the NCPES at one study site. We show how the multiplicity of the NCPES as a discrete, named form of data enrols various actors in the attempt to make the NCPES result in improvements in the quality of care. The example demonstrates that, where the link is established, the specific qualities of autonomy, authority and contextualisation are present.
In illustrating the various aspects of the work that data do and the work that is carried out with data, we draw a practical distinction between the limited number of formal, planned projects under way at any one time, as defined, determined and resourced by organisational leaders and processes (which we refer to as ‘formal QI’), and the day-to-day, informal improvement work that goes on, often at an individual level, outside those processes (which we call ‘everyday QI’). We return to this theme and explore it further in Chapter 8.
Data as multiple: the different effects of the Friends and Family Test
In Chapter 5 we used the example of the FFT to illustrate the variation of trust-wide patient experience work across our sample of study sites and also the multiplicity of data (i.e. that any item we call ‘data’ is, in fact, made up of a multiplicity of instantiations and translations that can look rather different from one another). Here we look at how the FFT triggers action for care improvements. Of course, we appreciate that the FFT is simultaneously embedded in a number of other projects, including performance management, benchmarking and patient empowerment. Here, in line with the aims of our study, we focus, in particular, on the ways in which the FFT is involved in interactions that result in improvements in patient care.
From our observations and interviews, the FFT, which is mandated nationally, proved a relatively costly and often demanding undertaking. At four of the five trusts in our study (all but trust D), we saw that considerable staff effort went into the generation and analysis of the FFT results. From a national policy perspective, the FFT recommendation rate (i.e. ‘98% of patients would recommend this hospital’) was seen as important. Several senior trust staff told us that health ministers like this metric and that trusts occasionally receive letters from the Secretary of State for Health and Social Care congratulating them on their scores (as happened during our fieldwork period). However, at all trusts, the members of staff directly involved in patient experience work often found the recommendation rate itself the least interesting of the FFT-related information. Unless this showed significant drops or increases, which was rarely the case (and certainly was not the case during the period of our observations), it was of little use to improvement work.
At the trusts where the FFT was considered an embedded data format that had, over time and with great effort, been made relevant to the organisation, patient experience officers and managers saw the strength of the test in its numbers: FFT results, or recommendation rates, based on high volumes of responses (trusts and services within trusts vary in terms of target response rates) provided a general indication of how things were going. They were a ‘blunt’ instrument that, in the local data landscapes, helped the organisation keep a finger on the overall pulse of patient experience. The limited value of the recommendation rate stemmed partly from the fact that staff across all trusts considered the main FFT question (‘How likely are you to recommend our ward to friends and family if they needed similar care or treatment?’) to be poorly worded and very confusing for patients (who were said to often comment that nobody would wish a friend or a relative to need hospital treatment), making the interpretation of responses difficult or unreliable. In addition, some staff members reported that, when the FFT cards were handed out to patients at the point of discharge (cards cannot be sealed and the responses are therefore visible), they saw the responses as even less indicative of patients’ actual experiences.
As a ‘recommendation rate’ result, the FFT connected with different actors, usually via reports and dashboards. The usefulness of the recommendation rate, in itself fairly limited, was seen by patient experience team members as directly connected to the response rate: the strength of the FFT, in this sense, was in the numbers, as one respondent put it when referring to the trust’s average FFT responses for 1 month:
. . . if 99% of 13,000 people say they’d recommend us, we must be doing something right!
Field notes, 9 June 2017, trust A
The significance of the recommendation rate also depended on its relationship with other data: members of staff referred to this as ‘triangulation’ of data at one trust and as ‘deep dives’ at another, whereby the FFT data were included in documents that placed them alongside safety or staffing data to allow a ward or service area to be better understood. Because the FFT overall recommendation rate was looked at as one element in a complex data landscape that included, for example, the results of the National Adult Inpatient Survey or those of the local bespoke inpatient survey, the information, although not terribly meaningful in itself, nonetheless contributed to generating a picture of patient experience that was valuable to the trust. A high response rate and high recommendation rate in the context of encouraging data from other sources provided reassurance that overall care quality, with particular reference to patient experience, was satisfactory. A dropping recommendation rate prompted comparison with other data (experience data, but also safety and outcome data, staffing data and any other relevant information for a particular time period) to help staff understand where problems might be arising.
The open comments were considered the most useful aspect of the survey, although still rather unsatisfactory because they were anonymous; patients had the option to leave their contact details but rarely did so. At all trusts, we heard from front-line staff that it was difficult for the organisation to do something about an individual’s poor experience if they could not identify the individual in question. In this respect, complaints were seen as more valuable forms of commenting on experiences of care, because they meant that the organisation could address the specific situation and hold to account the specific people involved.
The FFT open comments made it possible, and in some cases necessary, for staff to act on them. This was particularly the case for negative comments. Talking about the negative FFT comments, the matron for elderly care at trust E commented:
Oh you hear about them straightaway – they come flying down. But when I see three negative comments about one ward in a short space of time, I do think ‘oh hang on, is something going on?’ And I’ll let the deputy director of nursing know what we’re doing about the negative comment, over the phone or at our monthly one-to-one meeting, and she’ll say ‘just pop what you’ve said to me in an e-mail so that if anyone from the Executive asks, I’ve got it recorded’.
Field notes, 15 March 2017, trust E
At all trusts except trust D, the FFT was predominantly paper-based and comments were usually reviewed weekly or fortnightly by staff (typically the ward sister/manager, but also members of patient experience teams); any outstanding issue (e.g. a complaint about toilet cleanliness) would be addressed or escalated appropriately. At trust E, for example, free-text comments marked as ‘negative’ by the staff transferring the information to the electronic system were reported to the deputy director of nursing, who then contacted the matron with responsibility for the ward where the comment was reported, asking the matron to look into it. At trust A, members of the patient experience team to whom the cards were delivered for processing acted as a further point of action for the FFT comments. The positive comments were used to disseminate examples of good practice widely (e.g. via Twitter posts), and negative comments could prompt further action via the divisional director of nursing responsible for patient experience and the close links she had with the team. The divisional director of nursing explained:
It’s real time, so if [name of patient experience team member] ‘Oh gosh, I found something really awful’ she can, they can tell us straight away. So if [patients] say ‘Oh I was in x ward and the toilet had been blocked the whole 2 weeks I was in there’ because it’s meant to be in, within 48 hours, you can follow that up quite quickly you know, oh that ward’s had a blocked toilet or a toilet not functioning. Ring the ward sister or the matron and we can sort something out ‘cause it’s real time. So that bit’s really quite useful.
Interview 102, trust A
According to this divisional director of nursing, investing resources in processing the FFT within the trust meant maintaining it as a nearly real-time exercise:
. . . say one of the trusts down the road may use Picker to do it, so it all goes back to Picker. They don’t see it in real time. So, [name of patient experience team member] downstairs can say to me ‘Toilet’s blocked’. If it’s gone to Picker or another company or somewhere else or it’s done automatically online, you don’t necessarily see it unless you’ve got someone in there.
Interview 102, trust A
In some cases, when patients did leave their name and contact details, an FFT card could be treated as an informal complaint and trigger the organisational procedures for dealing with complaints (which at all trusts, except trust B, were handled by a separate team).
The interactions briefly covered here highlight the different ways in which actions to address substandard care can emerge as a result of interactions with different ‘versions’ of the FFT, but also how (as discussed in the previous section) each ‘version’ of the data really is a different entity to different people. For a matron, for example, the FFT data exist as the comments for one of her wards, and not necessarily as the numbers associated with the recommendation rate. However, for the board of directors, the FFT data may exist only as the response rate and recommendation rate. It is virtually impossible to conflate these different manifestations of the FFT into a unitary technology of governance. Furthermore, as will be argued in Chapter 8, this means that the idea of the FFT as a ‘thing’ that works poorly or well enough or could be improved does not really capture its salient features, those that enter into associations with actors and that produce work aimed at ensuring good experience of care.
As discussed, different ‘versions’ of the FFT were made to work towards improving care in different ways. However, we also saw that these lines of action could be inconsistent across different clinical divisions generating these data. For example, at trust A the FFT was considered a useful form of real-time feedback that allowed the possibility of immediate responses to ward issues; however, this was not the case for all wards or all members of staff interacting with the cards on the ward (including for the clinical areas of cancer care and care for patients with dementia). It was possible for the FFT comments to be more valuable for one ward than for another, and to carry more or less weight for different members of staff. Although the examples we illustrate in this report focus primarily on exploring the characteristics of instances in which patient experience data link to action for improvement, we remain aware that there are plenty of cases in which these links fail to materialise.
Having illustrated how the multiple nature of patient experience data can lead us to notice how different ‘versions’ of the FFT interact with different actors and produce different types of effects, we now discuss some of the interactions other types of data participated in and their effects at the study trusts.
Compensating for the flaws of patient experience data
Trust staff who deal with various forms of patient experience data recognise that such data have flaws and limitations. For data to be meaningful and do what they are intended to do, their design and use need to evolve, and ideally improve, alongside the evidence supporting their validity and scope. Nevertheless, in examining patient experience data practices, important insights can be gained from looking at the ways in which the people who interact with the data work with or around the data’s flaws and limitations. Like the FFT, which is the subject of much criticism, the NCPES is commonly regarded across the five study sites as having several limitations that reduce both the validity and the utility of the data provided. Across our interviews and observations in cancer services, we came across three recurrent issues: (1) timeliness, (2) inclusion criteria and (3) grouping mismatches.
Timeliness issues: The NCPES was criticised for its lack of timeliness by cancer managers and nursing staff on two grounds: (1) that there was too long a period (between 12 and 14 months) between surveying a patient cohort and reporting the results to trusts, during which time issues raised would already have been addressed (this was also an issue with the National Adult Inpatient Survey) and (2) that there was, conversely, too short a period (4 months) between the communication of the NCPES results and the surveying of the following year’s cohort of patients, meaning that the impact of the improvement initiatives taken would not necessarily be reflected in the following year’s survey results.
Inclusion criteria: Nurses criticised the NCPES because it included only patients who had an inpatient or a day-case stay. As a consequence, and because tumour-group-specific patient experience data are provided only if a tumour group receives > 20 responses, those tumour-group teams that tend to see large numbers of outpatients do not receive usable data relevant to their specialty. As one skin CNS from trust B said:
For me, it [the NCPES] has no relevance whatsoever. Because most – the majority of – skin cancers here are treated as outpatients. And the national survey – it’s an inpatient survey. So any of my patients who require inpatient surgery are transferred to [another trust in the regional Cancer Alliance]. So I get no reports about that at all.
Interview 008, trust B
A lead cancer nurse at another trust noted this for another tumour group:
So in upper GI [gastrointestinal], for instance, the patients that are targeted in the national cancer patients are only patients that have surgery. Now that’s less than 25% of the whole cohort of patients with oesophageal and gastric cancer, so that’s 75% of patients that are not being targeted at all.
Interview 002, trust D
Grouping mismatches: The NCPES tumour-group categories do not match the trusts’ own categories of work or care. CNSs reported that this also undermined their ability to use the data and could be demoralising for staff. A urology CNS who headed a team caring for urology and prostate patients at trust B said:
. . . [the NCPES was] not overly useful for urology at [this study site] because the survey responses are split into general urology and prostate and also you have to have a certain number of prostate patients [in order to be recorded]. It can be just really disappointing because you know how hard we work and you look and you think ‘oh there’s nothing in prostate again’.
Interview 003, trust B
Similarly, a head and neck CNS at trust E did not have confidence in the validity of the lower scores received for ‘head and neck’ because the NCPES combines head and neck cancer with thyroid cancer, which are in separate teams at her trust:
Because the national one is head and neck and thyroid joint, a lot of the negative stuff we get from the thyroid team which isn’t anything to do with us. [The NCPES] just clump[s] them together, they always have historically. [But] here it’s a completely separate service. Completely. Different MDT [multidisciplinary team]. Different specialist nurse.
Interview 007, trust E
The limitations discussed have a range of repercussions. They can generate cynicism and scepticism towards the meaningfulness of the whole exercise, as in some of the comments above, but they can also have positive effects, leading to compensatory actions that address in-built flaws and limitations. Here, maintaining a focus on our study aims, we describe the ways in which the survey’s limitations can lead to the generation of additional data and to more immediate action for improvement.
One effect of the NCPES’ limitations that we observed is that cancer CNSs work to mitigate the absence of certain tumour groups and of outpatient opinions by instigating their own patient experience data initiatives, such as locally designed, tumour-specific patient surveys. As one lead cancer nurse commented:
Yeah, I think the [NCPES] has its good points but I think it can be up for a lot of criticism because I think the sample of patients is often not great because it only targets inpatients, it doesn’t target outpatients and quite often these days, you know, patients who have significant treatment are not necessarily inpatients. So a lot of the palliative patients won’t have an inpatient record. So this is why a lot of the specialist teams here did their own local surveys to get an overall feedback and not just a kind of narrow band of patients.
Interview 002, trust D
The head and neck cancer CNSs from trust E produce their own local patient experience survey, which mitigates some of the shortcomings they perceived in the NCPES. Their local survey asks only about the patient’s experience of the CNS team in the acute setting rather than about the whole patient pathway and the entire MDT, which is often a very large team in the head and neck department. They reported that satisfaction rates were higher in this specialised survey than in the national one, in which head and neck cancer was combined with thyroid cancer, and that patients did not report unmet expectations:
It’s because it’s just about us [i.e. nurses] . . . our patients like us.
Interview 007, trust E
A striking example of how trust staff ‘plugged the gaps’ in the NCPES was provided by the work of a skin cancer CNS from trust D. In 2014, this CNS’s managers (senior nurses) suggested that she develop and carry out a local skin cancer patient survey; she explained that, at her trust, organising a survey of patient experience is seen as part of the CNS role. In addition, as reported above, skin cancer at this trust was largely an outpatient matter and, as such, its patients were not surveyed using the NCPES; this was another reason for initiating a local survey. After receiving guidance from her managers, cancer services and the trust’s quality improvement team, she wrote the questions and distributed the survey to patients. Many of the questions were adopted or adapted from the NCPES. The local survey runs annually and is administered by post; patients return completed forms using prepaid envelopes. The CNS said that, although the most recent results paint a ‘rather rosy picture’, things had been very different when the survey first started: ‘the results were far worse, really quite terrible’. Importantly, the survey revealed aspects of poor care of which she and her colleagues had been largely unaware: patients receiving inadequate information about pathology and treatment or being told that they had cancer at inappropriate times and places. The results of the survey have improved greatly since then and the CNS attributed this to the action taken in response to the poor early results. She highlighted new systems for communication and the development of new leaflets that signpost relevant services and reliable sources of information, as well as the re-forming of the patient support group. The work continues: in response to the latest survey results and a few informal complaints, the CNS and the lead dermatologist had decided to offer additional training to registrars on the issues patients may wish to discuss when they are first diagnosed with cancer.
We came across many cases in which formal tools for gathering feedback from patients failed to illuminate actual experiences of care directly but instead triggered meaningful action to compensate for the tools’ flaws. In particular, we heard at all sites that the most common form of feedback staff acted on was feedback heard directly from patients or carers. Nurses ‘keep an ear out’ for staff or patient conversations and then ‘nip issues in the bud’ as they arise. This was certainly not regarded as ‘data’ and was rarely thought of as ‘feedback’. There were also instances in which this ‘listening out’ over a longer period of time led to more radical changes to service provision. In Box 3, we illustrate one example of this process, which highlights how more conventional forms of patient experience data failed to identify the issues at hand.
A uro-oncology CNS at one of our study site trusts told us about the care of patients with metastatic prostate cancer who were being administered two particular drugs. These drugs need to be prescribed frequently and at set intervals, and patients are required to come into the clinic to receive their next round of medication. Patients also need to undergo frequent blood tests in primary care before attending outpatient clinics. However, not all medical staff used the same letter template for communications with GPs, which led to significant divergences in the experience of care. These various elements required careful co-ordination to offer good care to patients. As the CNS was the main point of contact for these cancer patients, she received telephone calls from them during which they expressed concern that, although they were running out of their medication, a follow-up clinic appointment had not been arranged. There were also delays with blood tests because of a lack of communication between hospital doctors and GPs. The CNS did not see this information from patients as ‘feedback’, ‘complaints’ or ‘data’; it was just patients talking about their problems. She said that these issues had not been identified by the NCPES, the FFT or the local urology patient questionnaire.
Her position as a CNS in a MDT meant that she also attended business meetings with the urology outpatient department matron and the clinical director; she was therefore aware of the pressures on clinics and the reasons why patients might not be receiving follow-up appointments after attendance. She was also aware that, from a financial aspect, it would make sense to ‘free up’ medical time and to run nurse-led clinics for these patients instead. She was thus able to present her proposed changes as a service improvement.
The CNS’s nurse-led clinic for these metastatic prostate cancer patients addresses some of the issues patients faced: the CNS gained access to the appointment system and now makes appointments for patients. She also drafted a standard letter to GPs to avoid delays in blood testing. She reported that patients liked the new arrangements and she was conducting a survey to collect this feedback and test whether or not the nurse-led clinic is an improvement from the patients’ point of view. She lamented the fact that she had not carried out a baseline survey before the changes to ‘scientifically’ track their effect.
GP, general practitioner.
This example makes three inter-related points. First, it shows how information not regarded as data can lead to improvements in care despite the presence of more formal mechanisms for collecting and acting on data. Second, it shows how different conceptions of ‘what counts as data’ and ‘how data can, or fail to, lead to improvement’ weave in and out of staff narratives about acting on information from patients. Finally, the CNS’s desire to conduct a survey also shows how new sources of ‘data’ (as understood by trust staff) emerge in response to a situation in which relevant existing data fail to point to issues that require addressing.
We also want to draw attention to the fact that all the cases presented here describe the ways in which a specific nursing role, namely that of the CNS, interacts with patient experience data and leads to improvements in care. The work carried out by CNSs to compensate for flaws and limitations in the patient experience data available to them led us to pay particular attention to the qualities characterising the interactions between them and patient experience data. We identified three specific qualities that were present wherever these interactions were clearly linked to improvements in care: (1) autonomy, (2) authority and (3) contextualisation. We discuss these qualities in the following sections and provide examples that illustrate how very different actors can be involved in bringing them about.
The qualities of interactions with patient experience data that make a difference to care improvements: autonomy, authority and contextualisation
Our study found three key interlinked qualities of the interactions between social actors (people, objects, data, systems, processes) that made a difference to care improvement: (1) autonomy (to act/to trigger action), (2) authority (to act/trigger action and for action to be seen as legitimate) and (3) the ability to contextualise information (to act meaningfully in a given situation).
Following on from our discussion of the work carried out in cancer services at different trusts to compensate for flaws and limitations in patient experience data, in this subsection we stay with the work of CNSs to illustrate these qualities in more detail (see Example 1: clinical nurse specialists’ work on patient experience data). These qualities, however, did not emerge only in interactions between CNSs and patient experience data. We also traced the way in which they emerge, or fail to emerge, in the work of other actors (both human and non-human) that aim to link data to improvement. In Example 2: authority, autonomy and data contextualisation in and through ‘learning sessions’, we look at patient experience ‘learning sessions’ at trust D; in Example 3: integrated quality – the authority, autonomy and contextualisation conferred by a ward accreditation system, we present how a ward accreditation system effectively integrates data and improvement at trust D; finally, in Example 4: authority, autonomy and contextualisation and patient experience teams, we discuss how the qualities of autonomy, authority and contextualisation are present in the varied work of patient experience teams across the five study sites.
Example 1: clinical nurse specialists’ work on patient experience data
Here we look in more detail at the human actors, cancer CNSs, who in all five trusts consistently displayed the qualities of authority, autonomy and contextualisation in relation to patient experience data work. Cancer CNSs provide specialist care and support to cancer patients. They are the patients’ main contact in cancer services and act as patient advocates in MDTs throughout the pathway from diagnosis to recovery. In their everyday work, cancer CNSs design local surveys, collect and analyse patient feedback (both formal and informal), and are expected to make changes to services accordingly. In our conversations and interviews with CNSs at all trusts, these responsibilities were seen as part of their role.
For instance, at trust D, uro-oncology and renal CNSs designed and acted on patient experience surveys as an essential, formal part of their professional development assessed by a yearly review with their manager. At trust C, lead CNSs in each tumour group were required to present monthly progress reports to the head of nursing for cancer on improvement projects, which were substantiated by various forms of patient experience data.
This is not simply an inherent feature of the professional role of the cancer CNS. Other actors play a part in giving CNSs a recognised place in patient experience data and improvement work. For instance, these activities are encouraged as a component of Macmillan’s Annual Report for Macmillan-badged nurses. Cancer services (e.g. breast cancer, prostate cancer) with Macmillan-supported nurses are asked by the charity to complete an annual report, one section of which deals with documenting patient feedback and how it is being acted on.
The infrastructure represented by organisations external to trusts, for example Macmillan, Quality Health with the national NCPES Executive Summary, and the Cancer Dashboard, reinforces and sustains the authority of CNSs, promoting their professional identities. In addition, CNSs’ ongoing, long-term contact with patients, combined with their specialist nursing knowledge of tumour-specific pathways, enables them to understand how best to act on any patient experience data they come across to improve care.
The specific work that cancer CNSs are required to do, together with the infrastructure through which this work is performed, places the CNS at the heart of patient experience work in cancer services. By focusing on the work that CNSs do with and around patient experience data, we were able to identify at least three qualities that were enacted in their interactions with the data. As we mentioned earlier, these were autonomy, authority and contextualisation. We noticed that these three qualities were a feature of interactions with data that could more clearly be linked to improvements in care.
CNSs exist in various specialties in acute care settings. Given that our research looked at both cancer and dementia services, we also examined the work of dementia CNSs in relation to patient experience data and care improvement. As we will see in the following paragraphs, the difference in infrastructure compared with cancer services has a significant effect on how, and to what degree, the qualities we identified can and do emerge.
We saw in Chapter 5 that what counts as patient experience data in services for people living with dementia is problematic because much of the data work is centred on carer experience. Nevertheless, we found that dementia care-specific professional roles possessed the qualities described above and acted to generate information that they themselves could link to improvements. In one of the study trusts (trust E), this role was again that of CNSs, who formed a small team of four. They had the clear aim, among others, of supporting patients in the early stages of their diagnosis of dementia. This group of patients had been identified as being relatively neglected, in that they were neither in an acute setting nor being followed up in primary care. They were therefore being offered very little support in the early days of their diagnosis. The team organised clinics and activities aimed at identifying and supporting people in the early stages of dementia-related cognitive impairment and their carers, and at gathering as much information as possible about them. This information, and the positive relationships established, would be used in the probable event that these patients would be admitted to hospital in the future, possibly with more advanced cognitive deterioration. These CNSs were acting pre-emptively, in anticipation of any problems arising. We found that, although these nurses did, on occasion, solicit formal feedback at their weekly clinics and involve themselves in trust-wide feedback activities (such as the one-off carers’ survey we illustrated in Chapter 5), they did not seem to use this feedback in any specific way; rather, they appeared confident in the knowledge they were gaining from the patients and the carers themselves, through the activities they organised, about what their needs were and how they might try to meet these needs on admission to hospital.
Comparing the work of the dementia and cancer CNSs in relation to patient experience data and improving care is instructive. Cancer CNSs operate in a richly populated landscape that includes national strategies, Cancer Alliances, Quality Observatories, national patient experience surveys, the national peer review, cancer treatment targets, clinical audits, Macmillan, MDTs, cancer services administrators and managers, other cancer CNSs and nurse-led clinics. It is through sites of interaction involving these and other entities that cancer CNSs, in certain circumstances, gain and continue to gain the qualities of authority, autonomy and the ability to contextualise data (contextualisation). Although a landscape of national audits and well-established charities (e.g. the Alzheimer’s Society) does exist in the case of care for people living with dementia, it is not as rich and as tightly woven as it is for cancer care. In addition, dementia CNSs are relatively uncommon (we encountered them at only one of our study sites); thus, their remit and competences may be less clearly recognised than those of cancer CNSs. It is possible that the combination of these elements contributes to making the challenging work carried out by dementia CNSs less visible, specifically the work carried out to link patient experience data to action for improvement. Nonetheless, the qualities of autonomy, authority and contextualisation still characterised the work of these dementia CNSs in their ability to link what they knew about patient and carer experience to the improvements they put in place in anticipation of care being needed by potential future patients.
During our fieldwork, we were struck by the singular way in which cancer CNSs across the five trusts worked with patient experience data to improve care. It was through a consideration of their work with data, the interactions of which they were a part and the types of relationships they formed, that we saw the three key qualities (autonomy, authority and contextualisation) emerge most clearly. Our examination of the work of dementia CNSs (present in only one of the five trusts at the time of fieldwork) shows that these qualities are not specific to one type of CNS alone. Indeed, what we want to emphasise is that the qualities are not an intrinsic, essential feature of particular roles; rather, they emerge through an ongoing series of interactions between human and non-human actors.
Following on from this basic premise, these qualities might equally pertain to interactions with entities such as processes, teams and infrastructure. In the following three examples we offer a sense of what these interactions might look like and how they might be linked to care improvements.
Example 2: authority, autonomy and data contextualisation in and through ‘learning sessions’
At trust D, senior staff responsible for patient experience organised ‘learning sessions’ for wards and service teams. Run on a rolling basis by division, learning sessions looked at how staff worked to improve patient, carer and family experience and how they intended to act on patient feedback in the coming months in order to improve experience further. Each learning session lasted for up to 3 hours and involved representatives from each ward or service in that division or subdivision. Representatives might include staff nurses, sisters, ward clerks, matrons, health-care assistants, allied health professionals or cleaners. Wards and services typically sent one, sometimes two, members of staff. Pressures in some areas meant that releasing staff to attend a learning session could be challenging for wards, but the sessions were, nevertheless, largely well attended. Other staff attended as facilitators or observers: they included the lead nurse with responsibility for patient experience, the trust patient experience director, other senior managerial nurses and a member of the trust’s quality improvement team.
The format for each patient experience learning session was the same. In advance of the meeting, the wards each sent the chairperson (a senior nurse) a ‘storyboard’. The storyboard is a PowerPoint (Microsoft Corporation, Redmond, WA, USA) presentation that lists three pieces of patient feedback and the corresponding three actions (‘tests of change’) the ward team took to address the feedback. Recently, the learning session steering committee has added two further columns to the storyboard: how the team measured the effect of the change they implemented and what the impact of the change was. The patient feedback can come from any source: the FFT, the internal trust-wide patient experience questionnaire, local surveys and one-off informally communicated feedback about a particular patient’s care (although some members of the patient experience team disliked wards relying on this last type, a point we explain below). Each ward or service presents their storyboard and then answers questions from the chairperson and other facilitators. After each ward has presented their storyboard, participants are asked to identify which, if any, of the tests of change they would like to try themselves; they are also asked to identify patient feedback that they want to work on so that they can share improvements at their next learning session. At some learning sessions, a member of staff from the quality improvement team gives a brief presentation on QI methodology, such as tests of change and PDSA (plan, do, study, act) cycles.
In Box 4, we illustrate a part of the learning session process with an example from a session for acute medical wards that we attended in February 2018.
Two staff nurses from a high dependency unit presented their ward’s storyboard. The staff had received comments from patients and their families that patients were experiencing discomfort during non-invasive ventilation administered through a BiPAP mask (a system for delivering pressurised air at two different pressure levels, one for inhalation and one for exhalation) and were struggling to communicate their needs. One of the nurses explained to the learning session that, from their point of view, they recognised that the BiPAP was uncomfortable but had taken the position that, because it is a life-saving treatment, there was little they could do to improve matters. However, as the nurse reported to the meeting, in the light of these informal comments some staff were curious to know more about how patients experienced the BiPAP and to capture this more formally. Thus, the unit’s test of change was to produce and distribute a patient questionnaire specifically about their experience of this form of treatment. The chairperson of the learning session asked the nurses whether or not they had learnt anything new from the survey results. The nurses replied that it had taught them to listen to patients and not to assume that they were confused because of their high carbon dioxide levels. The chairperson probed further into the process: ‘what are you doing differently as a result of the questionnaire?’ ‘It’s raised awareness among staff about these issues’, the nurse replied. ‘And what about how you’re addressing the communication issue?’ ‘We use the families to learn about how to interpret what the patient wants (because wearing the mask, the patient can’t speak) and we explain the consequences of the mask on admission rather than when the treatment begins so that we can set up a system of communication signs with the patient before they put the mask on’ (field notes, 27 February 2018, trust D). The nurses ended by telling the group at the learning session that this approach was working better, although they were still receiving comments that the mask was uncomfortable.
BiPAP, bilevel positive airway pressure.
This example shows how nursing staff on a ward developed a patient survey in response to informal comments to understand patient needs better and work towards ways of improving communication and care. As researchers, we have not observed these changes on the ward ourselves or spoken to ward staff about them. However, we draw attention here to the way in which various forms of patient experience data are mobilised by and through the learning session as a site of interaction. The learning session gives autonomy and authority to staff: although staff must produce storyboards in a prescribed format (‘patient feedback’, ‘tests of change’), all the other elements of the process work to emphasise ward staff’s ownership of, and responsibility for, both the patient feedback and the steps taken to respond to it. Having to present to others in the context of a ‘learning session’ (mandated and organised by ‘higher ups’) and to answer questions from other participants bolsters the nurses’ ability and authority to speak on how data and improvement are linked. The chairperson’s questions do not demand that staff do things a certain way; rather, they tease out information that demonstrates the nurses’ ability to contextualise data. In the interactions enabled by the learning sessions, the qualities of authority, autonomy and data contextualisation emerge.
We draw attention here to another aspect of the learning sessions we observed: as ‘patient feedback’, several wards and services used an individual patient’s case and the comments received from that patient or the patient’s family as the basis for their test of change. In steering group meetings following the learning session, some of the facilitators expressed concern about this: for them, good patient feedback data were those that demonstrated a pattern aggregated from many pieces of data over time, rather than a one-off individual situation. These facilitators suggested that some staff would benefit from more guidance about how to aggregate data in this way and how to present such data at future learning sessions. During the learning sessions, however, this lack of aggregated data did not hamper the ability of data to lead to improvements in care, as presented in teams’ tests of change. This echoes our findings about how cancer CNSs respond to and mitigate the flaws in the NCPES because of the qualities their roles possess; here, ward and service teams’ ability to act with authority and autonomy and to contextualise even ‘imperfect’ patient experience data means that improvements in care are possible and are recognised (through the presentations at the learning session) to have taken place.
The patient experience learning sessions at this study site are embedded in other trust-wide quality processes, including a ward accreditation scheme. We examine this scheme in more depth later in the section (see Example 3: integrated quality – the authority, autonomy and contextualisation conferred by a ward accreditation system); however, just as the cancer CNSs gained authority by association with entities such as Macmillan Cancer Support and the national Cancer Dashboard, the patient experience learning sessions in this example gain specific qualities through association with the trust’s ward accreditation infrastructure.
Example 3: integrated quality – the authority, autonomy and contextualisation conferred by a ward accreditation system
Trust D did not have a formally designated ‘patient experience team’ as conventionally understood by other trusts, that is, non-clinical administrative and clerical staff who collect, collate and communicate patient experience data such as the FFT. Instead, the trust relied on a small team of largely senior nurses who worked closely with colleagues from quality improvement to support wards and clinical services in improving patient experience in response to data. They did this principally through managing two related mechanisms: the learning sessions we presented earlier (see Example 2: authority, autonomy and data contextualisation in and through ‘learning sessions’) and the ward accreditation system. We describe how patient experience data feature as an integrated part of this system and how they gain particular qualities as a consequence.
Trust D’s ward accreditation scheme has been operating in its current form since 2008 and consists of 13 standards. How wards perform on patient experience matters is not confined to a single standard but runs through the scheme; it comprises different elements, which are demonstrated by various sources of data. These sources include staff participation in patient experience ‘learning sessions’ (see above) as well as direct questions put to patients and staff by the trust nurse who manages the ward accreditation scheme. She records what patients report about noise at night, food, cleanliness, privacy and dignity, and communication with ward staff. Another source of data is the local inpatient survey, which is administered in most areas through the bedside television screens. The scheme also takes account of whether or not the results of the local inpatient survey, particularly responses to the question ‘Overall how would you rate the care you received on this ward?’, are regularly accessed (ideally weekly, but usually monthly) by the ward manager and displayed in public areas on a prescribed ward information board (the senior nurse checks, through software, how often ward managers download these data).

Accreditation processes are not seen as removed from the ordinary work of the ward: the senior nurse makes herself aware on an ongoing basis of how wards work and the challenges they face. She gives advice to individual ward managers and organises QI involvement or recommends training in order to drive improvement. Thus, the accreditation is not simply a ‘snapshot’ in time taken by distant external figures; the central role played by this senior nurse and a small team of nursing and other colleagues means that it is a process of improvement built on contextual awareness on the part of the accrediting authorities. Although participation in the ward accreditation process is compulsory, wards have considerable autonomy in how they respond to patient experience data, as we have shown in our discussion of the learning sessions earlier. In the ward accreditation system processes, patient experience data appear alongside other types of data, such as safety and clinical outcomes, in ways that make it very difficult to separate patient experience as an independent item.

The scheme has four ratings and the frequency of inspections for accreditation decreases as a ward improves. Once a ward has achieved the second highest rating for three consecutive rounds of accreditation, it becomes eligible for the trust’s highest rating. This engages another process, which determines whether or not the nominated ward should indeed receive this elite rating. For the purposes of this report, we refer to this elite rating as ‘gold’. A Gold Panel is convened, which recommends to the trust board whether or not to award gold status to the ward under consideration. This is a high-level panel, facilitated (but not chaired) by the senior nurse in charge of the accreditation scheme; it includes the director of nursing, the medical director, non-executive directors, senior nursing and non-clinical management, and patient representatives. The Gold Panel asks the ward to produce an application for gold status. This document (called a ‘pack’ by staff) follows a standard template organised in 17 sections and contains data and information about safety, staffing, finances and patient experience. In this document, prepared by the ward manager in collaboration with members of her team, patient experience data exist as discrete entities.
Thus, there is a section on ‘patient feedback’, which asks for the local inpatient survey or the FFT results; a section on complaints; and a written testimony from a patient (there are other testimonies from a consultant and an allied health professional). Patient experience data may also feature in the section on ‘QI involvement’. Box 5 provides more detail about the pack and the Gold Panel process.
In one ward’s pack, the ‘patient feedback’ section focused on seven questions from the local inpatient survey, showing the results for these questions over 10 months with explanations for better or worse performance, along with a table listing the actions taken in response to these questions. These graphs and charts were followed by a page with scanned copies of thank-you letters, which the ward manager had asked patients and relatives to provide especially for the pack.
Pulling all of the data together into the ‘pack’ requires considerable work and co-ordination. The process encourages ward managers to learn about data sources within the trust, whether those are about patient experience or finance. One ward manager who had been through the Gold Panel process remarked:
I think with this, you know, just collaborating [on] all the information and getting the information from other people is helping me. I think that, you know, I’ve managed to find who I can, you know, get the information from . . . who should I e-mail, you know, who would help me? So who would give me all this information? So all those things. You know, I document their names, who I e-mailed, you know, because I have like a, you know, like who is the person involved in this, who is the person involved in that. So at least I’ve gathered that information and then, in the future, then it’s not hard for me to gather information again.
Interview 008, trust D
The pack should marshal these various forms of data into an argument for why the ward should be awarded gold status. One ward manager wanted to emphasise the cost savings, which were a result of a newly implemented system of video ‘specialling’ (or one-to-one nursing), because she was aware that the trust had made budgetary control a particular priority. The pack is not meant to be comprehensive. Rather, the information it presents is seen in the context of the other, arguably more important, elements of the Gold Panel assessment process: an hour-long ward visit by the panel; a ‘panel huddle’; a presentation to the panel by various members of the ward team, which also includes a video or an audio testimony from one or more former patients or relatives; a question and answer session with the ward team; and a final panel discussion of all of the elements, after which they produce a recommendation.
During the ward visit, members of the panel talk to patients and staff. They ask questions about the quality of care, for example when the patient last saw a nurse and whether or not the ward is noisy or clean. Others ask more open questions and let the patients or relatives speak without interruption. Panel members talk to different grades and types of staff (student nurses, health-care assistants, ward clerks, housekeepers and domestics, as well as nurses), asking them about ward leadership, how they improve practice and how they feel about staff development.
Back at their meeting room during the panel huddle after one ward visit, members said that they got a ‘positive’ feeling from patients. One member commented that ‘you can make the surface look good but you can’t do that with everyone’. Another member observed how all of the patients were smiling: ‘if it’s just polish, you’d pick it up in other ways, that is the patients wouldn’t be smiling’. Having talked to patients themselves and seen them in the context of the ward, they felt that the patients were genuinely contented: ‘there was no-one who looked particularly distressed’ (field notes, 16 November 2017, trust D). The comments and conversations around reported patient experience at the panel huddle were woven into the discussion of the flow of work of the ward, and panel members integrated their reflections on this solicited patient feedback in the context of evaluating dimensions of ward life, such as leadership, staff experience, cleanliness and safety.
As outlined above, the Gold Panel process carries out a great deal of work in shaping how patient experience data are formed, presented and enacted in assessing and improving the quality of care. However, we focus here on one particular instance, which demonstrates how this process promotes a particular relationship between data and improvement that creates and reasserts the qualities of autonomy and authority. At several points during the Gold Panel’s work that we observed, which related to one particular ward, the ward clerk emerged as an important figure. One member of the panel spent a considerable amount of time talking to her during the ward visit and reported the conversation back to the panel during the panel huddle. This panel member reported that the ward clerk had established a communication tool between the carers of patients with dementia and ward staff. This was in response to feedback from carers reporting that they found it difficult to communicate their needs, or those of their cared-for relatives, to doctors or nurses who might not be present when they visited patients. The panel member reported that this tool had stopped issues going to PALS (the Patient Advice and Liaison Service) or to complaints. The director of nursing was impressed with the ward clerk’s initiative. The ward clerk’s work featured again in the panel process when she contributed to the ward’s presentation, which followed that of the ward matrons. She discussed how the ward had worked to improve the experience of patients living with dementia, responding to the feedback about noise and light by distributing earplugs and eye masks, and highlighting work (e.g. the creation of a quiet room) that the panel had observed on their visit.
In this example, the work of the ward clerk in responding to feedback to improve care was made visible to the Gold Panel. Through the set of processes and interactions afforded by the accreditation system, in which panel members engaged with data and information about the ward in a serious, detailed and considered way, the ward clerk was recognised as having responsibility for taking action that allowed staff and patient carers to communicate more effectively. By examining and integrating a wide range of information, the panel considered her work within the overall ‘culture’ of the ward; in doing so, they made the ward clerk’s work on collecting data and acting on them to improve experience part of a broader picture, in which this patient experience work had similar status to other elements discussed.
Example 4: authority, autonomy and contextualisation and patient experience teams
In Chapter 5, we presented the main features of the composition and organisation of patient experience teams, where they existed, at the five trusts in our study. Here we look at whether or not, and how, the qualities of authority, autonomy and contextualisation applied to these teams. Patient experience teams (those non-clinical trust staff who collect, analyse and report patient experience data) were not always able, or even in a position, to translate data effectively into improvements in care for patients. At some study trusts, such teams lacked the authority to act and, because they were removed from clinical work, the ability to contextualise data in the lives of patients or the work of a ward. These two qualities are linked: the inability to contextualise, which results from being physically removed from the direct provision of care, also means that such teams lack authority to act in the eyes of staff, such as nurses or doctors, who do provide front-line care. Another reason is that ‘patient experience data’ (the FFT in particular) often emerge as less useful, ‘less scientific’, less clear-cut, less meaningful or simply less important than other forms of data, such as safety or clinical outcomes data. Some forms of patient experience data also arouse negative emotions; for example, and as we have seen, the FFT is almost universally disliked by front-line staff and by many senior nursing staff. In three of the five trusts (trusts B, C and E), this translates into a lower status for patient experience data and, consequently, a perceived lower status for patient experience teams.
During one of our earliest visits to trust B, we observed a discussion between two senior nurses with responsibility for quality. The nurses were making a distinction between ‘actions’ and ‘quality improvement’. One nurse said:
So the ‘big-ticket’ items, like clinical outcomes, never events, tend to be subject to QI methodology. Patient experience on the other hand tends to get addressed through ‘actions’, which isn’t necessarily a formal method as such and not in line with QI methodology. So, for instance, you get a set of complaints or comments about a particular thing on a ward. They act to change it, that’s an action. They just change that. It’s not formal and it’s not following a method. That’s not to say it’s not a quality improvement, because it is: the action was based on feedback and it’s led to a change. But it is informal as opposed to formal. It’s because we don’t know how to deal with the feedback that is informal.
Field notes, 16 January 2017, trust B
This view was echoed by the head of quality improvement at trust C:
. . . patient experience is almost an indicator of something but it’s not used as a direct measure in any improvement project [. . .] I like things in black and white, I don’t like things that are grey. Patient experience is grey.
Interview 001, trust C (our emphasis)
In both these accounts, an image emerges of ‘patient experience work’ that is qualified by the type of data that produces it. These are not neutral evaluations: QI work is high status, formal and associated with ‘big-ticket items’; patient experience data cannot be part of QI projects because they appear as a nebulous, uncertain type of thing, leading to inferior (although appropriate) ‘action’ but not ‘improvement’, which is seen as scientific and measurable and therefore as having more authority.
We now present two cases from two trusts that illuminate the ways in which patient experience teams gained authority through interactions with data in particular settings. The first example relates to trust E’s processes for dealing with complaints, and the second to trust B’s interaction with a Clinical Commissioning Group (CCG).
Learning from complaints at trust E
Trust E had a sophisticated process for responding to and learning from complaints. The trust’s complaints process was co-ordinated and managed by a dedicated complaints manager working with a team of complaints investigators. This work was overseen and directed by the head of patient experience. At other trusts, complaints work by patient experience or PALS teams was principally concerned with logging complaints, identifying the correct individuals in the trust to whom a complaint should be communicated for further investigation, ensuring that complaints handling adhered to mandated timescales and targets, and liaising with the complainant throughout the process. Patient experience teams dealing with complaints did not generally involve themselves in how a particular ward, service or department might learn from a complaint, nor did they compel action to change practices; they were removed from the process of improving care. At trust E, however, patient experience data in the form of complaints, and the complaints teams that dealt with them, were enmeshed in various relationships that produced the key qualities we outlined above. We provide detail of the process we observed in Box 6.
At trust E, all complaints are considered on a monthly basis by a group of senior managers, which constitutes a standing committee of the trust board. The group is chaired by a non-executive director and is attended by the executive director of nursing, the deputy director of nursing with responsibility for patient experience, the director of quality, associate medical directors, the chairperson of the council of governors, the head of patient experience and members of the complaints team. This group receives a bundle of documents that gives details of each complaint received, how the complaint is being handled (i.e. responses received from relevant departments) and what action is being taken to change practice as a result. The members of the group go through these papers and cases, highlighting any issues and discussing certain complaints in detail. They also pick out trends (‘poor communication’, ‘privacy’) or instances that are striking or relate to other information about practice. Two of these ‘main points’ are distilled and included in a monthly ‘complaints mailer’, which is written by the head of patient experience and distributed to all trust staff. This mailer encourages staff to work towards addressing the two points that have emerged from the group’s consideration of complaints.
The complaints team also present an action plan outcome report at this meeting. They ask directorates that have received a complaint to report an action plan back to them, the implementation of which they then monitor at frequent intervals. This information is presented to the group in the outcome report in the form of Red–Amber–Green ratings. At the meeting we observed, the head of patient experience noted that this requirement to report on the progress of action plans ‘creates the possibility of assessments and monitoring, although not policing’ (field notes, 7 November 2017, trust E).
Other key elements of action related to complaints are early intervention meetings and local resolution meetings. Complainants are offered these meetings by the trust complaints team so as to explore, resolve and respond to the complaint in a more immediate, comprehensive and timely way. Staff named in the complaint as well as those investigating it also attend. The meetings often reveal unsuspected reasons both for the initiation of the complaint on the part of the patient and for the unsatisfactory care provided by trust staff. As such, trust staff who have participated in these meetings find them useful forums at which learning can take place. One matron described his involvement to his senior sisters at their monthly meeting. He explained to them how, through the course of the resolution meeting, the complainant had found a consent form difficult to understand: ‘because the consent form was brought to the meeting and we all had to read it through, it became clear that it was really difficult to understand and we’re clinical people . . . It wouldn’t have been picked up on if we’d just written a letter in reply because we wouldn’t have looked at the form’ (field notes, 4 October 2017, trust E). Members of the patient relations and complaints team, and the head of patient experience, play a visible, central role in facilitating the resolution meetings and advising on how teams can best learn from them.
We found that, through the interlinked processes relating to the management of complaints at this trust, the head of patient experience and his team build authority and the ability to contextualise data, something that other patient experience teams lack in many situations. Their central participation in board-level groups, together with their visible involvement in successful early resolution meetings, means that their authority in the eyes of nursing and medical staff is bolstered; the meetings help otherwise ‘distant’ administrators become involved in the dilemmas of front-line care work. Likewise, their engaged communication with directorates regarding action plans arising from complaints improves their ability to contextualise data and displays their oversight authority to trust colleagues. Through our ANT lens, this example shows how the authority of patient experience teams, and of the data they administer, can be built through association with other trust actors. The next example shows that patient experience teams and data can also gain authority and autonomy through interaction with external actors, in this case a CCG.
External entities: the role of the Clinical Commissioning Group on patient experience data work at trust B
At trust B, the head of PALS was able to draw on the CCG as an external source of authority to challenge and reform the way patient experience data were reported and acted on by trust staff. New reporting requirements from the CCG that commissions the services provided by this trust proved instrumental to this. The trust must now provide more detailed information and evidence regarding the use of patient experience data as part of its submission to the CCG’s quarterly clinical quality contract review meeting. For instance, one of the new requirements states that the trust must report on ‘evidence of service improvements as a result of patient feedback’. The head of PALS could use these new requirements as leverage to encourage staff to pay more attention to patient experience data, such as the FFT, and to establish clear links between the data and the improvements in care arising from them. An extract from our field notes illustrates the case in more detail in Box 7.
The head of PALS told researchers that, until recently (and indeed during our fieldwork), this sort of information had been difficult for her to obtain. Although she asked matrons, ward managers and other senior staff how feedback had led to changes, she rarely received replies to these requests. Moreover, the demands of her office-based duties meant that she rarely visited wards or talked to ward managers or senior nurses. However, the new reporting requirements meant that she was compelled to find out if and how improvements had occurred in response to patient experience data; in her communications with matrons and divisional nursing directors, she explained that this demand came from the CCG. The difference in response was striking: she received detailed information on how the FFT data had been used and how ‘we said, you did’ notices were displayed in service areas; she also received photographs illustrating new practices. The head of PALS told us that she thinks staff replied because she had said that the CCG was demanding this information: ‘that gave it more weight’. It has also improved her level of engagement: ‘It’s great to hear what they do with the feedback because I never find out and sometimes it feels like I’m flogging a dead horse. For me, I can see something is happening about it’ (field notes, 19 June 2017, trust B).
The enhanced ability of the head of PALS to learn about the link between data and improvement, and the requirement that she write a detailed quarterly report to the CCG about these issues, have had other effects. In relation to the quarterly report, she found it strange that the CCG, an external organisation, should receive a more detailed report about patient experience than the trust itself. She has since proposed to her line manager, the quality lead and the chief nurse that a version of her quarterly report be presented for discussion to the monthly quality committee, which considers patient experience, safety and clinical outcomes. This chimes with the chief nurse’s recent calls for more information about whether or not staff are using the FFT results to improve care. Moreover, now that divisional directors of nursing and matrons are communicating their improvements to the head of PALS, they are better able to answer such questions when asked at the quality committee; they too, then, are motivated to uncover how and whether or not patient experience data lead to improvement in care in the areas under their charge.
Patient experience teams might be considered to be key actors in linking patient experience data to quality improvement. We found that, generally, patient experience teams, in contrast with other hospital staff such as CNSs, are rarely part of interactions that give them authority, autonomy or contextualisation. However, some of these qualities can be acquired through the embedding of patient experience teams’ work or projects in institutional structures regarded as having high status, as we have shown in the two cases above.
This subsection has looked in detail at a variety of interactions involving humans and non-humans in which the three qualities emerge and through which patient experience data are successfully linked to improvements in care. We emphasised that, although the work of cancer CNSs presents the clearest example of this, the actions of other entities are also able to produce these key qualities. In the next subsection, we develop this further by showing how the work around the NCPES at one trust produced new sites of interaction at which the qualities necessary for linking patient experience data to improvement emerged. The fact that a singular named form of data such as the NCPES can be entangled in new interactions in this way and cause a variety of (sometimes unintended) effects also demonstrates that data are multiple.
Enrolling social actors to patient experience data work: the National Cancer Patient Experience Survey
Activity around patient experience data can go beyond interpreting data and ‘reading across’ in order to promote particular quality improvements. We found that patient experience data can mobilise, and be used to mobilise, key actors and create or refashion relations, systems and infrastructure for collecting and responding to patient experience data. In such situations, it matters that the data exist as a named, recognisable form (be it the NCPES, the Adult Inpatient Survey or a patient story) around and through which this refashioning activity can take place. As we demonstrated in the previous section, the interactions and relations involving data can be characterised by particular qualities, and the emergence of these qualities has consequences for the extent to which the data can lead to improvements in care. However, although the name of the data remains the same across these interactions (‘the NCPES’), the data partake of these qualities to a lesser or greater extent depending on the other entities involved in the interaction. Whereas in some interactions the association of these data with other entities may bolster their authority, in other interactions the same named form of data can be deprived of authority to the benefit of other actors. To illustrate this, we look closely at the activities of clinical managers in relation to the NCPES at one of our study sites.
As we discussed in Chapter 5, CNSs are often charged with creating action plans in response to the NCPES results for their tumour group. This tends to be the extent of the prescribed activity, and CNSs and MDTs have considerable autonomy in managing their response. In some trusts, the lead cancer nurse provides guidance and oversight of action plans or formulates additional cancer-wide actions. The lead cancer nurse at one of our study sites was dissatisfied with the way in which the trust conventionally responded to and used the NCPES, which followed this ‘CNS-produced action plan’ model. She suggested that this way of using the data to drive improvement seemed to have stalled because the same issues were being raised year after year, indicating to her that they were not being addressed. She wanted to make the response to the NCPES work differently by, first, involving medical and managerial staff in addition to nurses; second, communicating the NCPES more widely within the organisation, such as to the trust’s quality committee and at divisional board and matrons’ meetings; and, third, involving patients to help work on the survey’s findings. In the following sections, we examine how she and her colleagues in the cancer services team fulfilled some of these aims and what kind of qualities the NCPES gained, as a result, through these new interactions.
Enrolling physicians and other professionals in data work at trust B
As part of work to better meet national cancer performance targets, the clinical lead in cancer at trust B (a physician) and other cancer colleagues (including the lead cancer nurse) have established a new cancer delivery group composed of consultants, divisional managers, senior nurses and senior non-clinical cancer managers. These performance targets, such as the 2-week fast-track referral target and the 62-day referral-to-treatment pathway, are nationally monitored, and breaches of these targets entail financial and other penalties. The lead cancer nurse presented the results of the NCPES during the first meeting of the group, which was dominated by a discussion about how to eliminate the 62-day target breaches. The meeting identified the lack of a CNS in one tumour group as a cause of breaches in that specialty. As the cancer services manager said:
We don’t have a CNS batting for the patients . . . We need to be picking [patients] up individually off the list and saying this person needs to be booked into clinic . . . That we haven’t had a CNS for seven months means that we haven’t been able to do this.
Field notes, 17 October 2017, trust B
This highlighted for the group the key role of the CNS in ensuring a patient’s smooth progress along the pathway. Thus, later in the meeting, when the lead cancer nurse presented the results of the NCPES and suggested that low performance in the survey in some tumour groups was the result of poor CNS staffing, the group were receptive to this idea, having discussed the impact that poor CNS staffing also has on the incidence of breaches. A consultant noted that ‘It’s the support that’s given by the CNSs which can affect their experience long-term. That’s the most important thing that’s needed’ (field notes, 17 October 2017, trust B). The lead cancer nurse then asked the clinical leads to produce action plans in consultation with their MDTs, thus challenging the assumption that this was principally the task of CNSs. At this meeting, the lead cancer nurse used the group in creative ways to achieve valued aims and to enrol professionals other than CNSs in the work of generating action plans in response to the NCPES results. In addition, the NCPES gained in status because it was associated with the discussion of another type of data, breach data, which is regarded as more important (it was concerns about breaches and an NHS Improvement evaluation that had initiated this high-level group in the first place). In other words, the mobilisation of the NCPES through this meeting served to change the infrastructure through which it worked, involving clinical staff in new ways.
Enrolling external organisations and patients in data work at trust B
As part of her work to change how the trust responds to the NCPES, the lead cancer nurse also organised a patient event around the survey (attended by 15 participants, including patients, staff, Cancer Alliance representatives and Macmillan representatives). The event was co-hosted with the regional Cancer Alliance, and the initial presentation gave participants information about the Alliance as well as about the NCPES. Cancer Alliances are collaborations set up by NHS England that bring together local senior clinical and managerial leaders and practitioners representing the whole cancer patient pathway across a specific geography [see www.england.nhs.uk/cancer/improve/cancer-alliances-improving-care-locally/ (accessed July 2019) for further detail]; they are often organised by tumour group. The event followed a workshop format, with the participants split into small groups, each containing a trust staff member. Each group discussed three questions: (1) ‘how can we improve the cancer patient experience survey results?’, (2) ‘how can we improve the support given by Clinical Nurse Specialists?’ and (3) ‘how can we improve information given at [the trust]?’. The last two questions were prompted, in part, by survey findings showing some lower scores on related questions. Participants discussed these issues, and patients shared their experiences of care as well as their ideas for improvement. As a result of these discussions, the lead cancer nurse learned that, among other things, patients wanted CNSs to be more ‘proactive’ in their relationship, that is, to contact them without necessarily being contacted by the patient first, and to have follow-up appointments with a CNS a few days after initial diagnosis in order to understand the information provided about their treatment.
It is important that the lead cancer nurse and other trust staff who attended the event heard these specific requests. For them, the patients provided not only more context for the survey findings but also new information about how the trust’s services are experienced. We also found it significant that, whenever the NCPES was mentioned in terms of how it was experienced by patients, the conversations were fraught or indifferent in character. One patient complained that, having completed the form online, she continued to be bombarded by paper versions in a way that left her feeling harassed and made her doubt that her online response had been properly registered. Other patients were unsure whether questionnaires they had responded to were in fact the NCPES or another feedback mechanism, and others still were unaware of the existence of the NCPES.
Although at this event the NCPES was ostensibly the ‘headline act’, we identified three ways in which it seemed to serve as an enabler for other actors to gain authority and recognition. First, the event saw the regional Cancer Alliance emerge (by being a co-host) as a key site at which patient experience data can lead to improvements in care for patients: the Alliance acknowledged the limitations of the NCPES and offered support to mitigate them. One form of support consisted in the Alliance organising its own surveys of patient experience in those tumour groups poorly served by the national survey. Here the Cancer Alliance simultaneously mimicked and transformed the infrastructure of the NCPES by involving patients more centrally in quality improvement work and by arguing that its surveys, designed by Alliance doctors and nurses, would be more locally meaningful. Second, by serving as a point of contrast, the NCPES enabled Macmillan to propose a broader understanding of patient experience data. Macmillan contrasted its work on patient experience data with that represented by the NCPES and proceeded to enact this alternative at the event itself by soliciting patient experience feedback there and then. The Macmillan representative clearly stated: ‘We’re taking a different approach. We’re taking a wider focus than just the kinds of things that the NCPES talks about’ (field notes, 30 November 2017, trust B). Third, a patient group providing experiences of care was mobilised through a discussion around the NCPES. By the end of the event, one stated aim was to make this patient group a key part of how patient experience work would be carried out in the future. Although the data from the survey provided an initial focus for discussion, the NCPES as conventionally understood (results, scores, benchmarking and comments) quickly receded from view. Through the event and the sets of relations enacted by patients, staff, Macmillan and the Alliance, a different way of ‘doing data’ and ‘doing improvement’ emerged: a patient experience infrastructure that set itself apart from the NCPES.
Throughout this chapter we have described detailed examples of the ways in which data, in their multiple forms, can and do lead to action for improvement. We have discussed how data participate in interactions with human and non-human actors that are characterised by the emergent qualities of autonomy, authority and contextualisation, which our analysis identified as key to the connection with improvement work. We now turn to the findings from the ‘sense-making’ phase of our study, during which we worked closely with participants from the study trusts to identify the ways in which our findings could prove practically relevant to NHS organisations.
Chapter 7 Findings 3: Joint Interpretive Forums
The second phase of our study was based on a sense-making approach. Although we are familiar with the history and the use of this term in the organisational literature,59–61 here we use it loosely to denote the process of translating research findings into actionable principles. We engaged with participants from the five study sites in structured workshops, modelled on JIFs, which aimed to allow for the sharing of different perspectives: those of researchers, of different staff members from the trusts and, for the cross-site JIF, of policy colleagues. In this phase, we were able to share emerging findings from the study with our participants, and also to raise, and invite participants to raise, issues and questions for discussion. The overarching aim of this phase was to extract from our data and preliminary findings guiding principles that would be relevant and meaningful to NHS organisations. Details of how the JIFs were planned are provided in Chapter 3; here we examine the processes through which the JIFs generated additional findings and the relevance of those findings to this study.
Between January and May 2018, we ran six JIFs in total: one cross-site JIF in London (January 2018), which brought together key participants from each of the five study sites and policy-makers (from NHS England, NHS Improvement and the Point of Care Foundation charity), and five local JIFs, one at each study site (February to May 2018), which a range of trust staff attended. The duration and structure of the first, cross-site JIF were the result of a careful planning exercise by the research team (detailed in Chapter 3); local JIFs were tailored to the needs of trusts, and staff were encouraged to have as much or as little input as they wished. Details of the duration and attendance of the local JIFs are provided in Table 6.
TABLE 6 Duration and attendance of the local JIFs

| JIF by date | Duration (hours) | Number of participants | Participants’ roles |
|---|---|---|---|
| February 2018 | 3 | 13 | Senior nurses (director of nursing, trust lead nurses for corporate and cancer, matrons); staff nurse; ward clerk (orthopaedic trauma ward with large numbers of patients with dementia); quality improvement team; equality and diversity manager; trust governor |
| March 2018 | 1.5 | 21 | Nursing staff of all levels (deputy chief nurses, divisional director of nursing, quality lead nurse, lead nurses for dementia and cancer, ward sisters, CNSs, staff nurses, student nurses, a health-care assistant from a care of the elderly ward); non-clinical staff (head of PALS, cancer services administrator) |
| March 2018 | 2 | 8 | Nursing staff of all levels (divisional director of nursing, lead nurse for cancer, ward sisters); non-clinical staff (patient experience team members, patient representatives); trust governor |
| March 2018 | 4 | 18 | Nursing staff of all levels (trust deputy director of nursing, lead nurses for cancer and dementia, matrons, CNSs); patient experience team; directors of human resources and estates; directorate managers; trust non-executive director; research and development |
| May 2018 | 2.5 | 20 | Nursing staff of all levels (chief nurse, deputy chief nurse, lead nurses for cancer and older people, senior matron, matrons, ward managers, CNSs); non-executive director; divisional manager; head of organisational learning; patient experience team (including PALS) |
The activities of the JIFs (poster walk, provocations activity, research team presentation and discussion of the implications for trusts) and ensuing discussions provided further insight into how patient experience data translate, or fail to translate, into improvements in quality for patients in hospital trusts. Importantly, the cross-site JIF provided an opportunity for trust staff to learn about how patient experience data were handled in the other four trusts, thus enabling reflection on their own practices. In this section, we illustrate findings from the various activities of the JIFs. Our findings were the product of our analysis of field notes and of team discussions. Our aim was to identify analytical themes that proved relevant across cases while remaining sensitive to examples of significant differences. We therefore report our findings by discussing the main themes we identified for each section of the JIFs.
Poster walk
One of the principal outcomes of the poster walk was the realisation among participants that there is great variability in the way in which patient experience work is organised in different trusts. Much of this variability centred on the nature, number and organisational prominence of the ‘patient experience team’ tasked with collecting and communicating patient experience data. At the local JIF for trust B, staff became aware that the trust did not have a patient experience team in the strict sense of the term. As one senior nurse said, ‘what we’ve got is a PALS team, which is something quite different’ (field notes, 8 March 2018, trust B). This realisation was prompted by the information displayed on the posters for three other trusts, which showed that PALS work was handled by a separate team. It sparked a discussion about whether or not PALS officers ought to be doing patient experience work and the difference this made. Participants at this trust proposed that PALS worked in a ‘reactive’ way, whereas patient experience work was more ‘active’. One participant developed this by noting that one other trust in particular (trust C) has a specific role dedicated to ‘Patient Experience and Involvement’. The matter of naming appeared important to participants in this JIF because it signalled the value placed on patient experience by the trust. By way of contrast, participants at the JIF at trust E, which has an identifiable patient experience team, noted that ‘although [trust B]’s is a PALS team, it’s really a patient experience team’ (field notes, 14 March 2018, trust E). In this example, the recognition of different practices prompted staff to raise questions about whether or not, in their own organisation, PALS and the patient experience team could work more closely together than they do at present.
Trust D did not have a patient experience team at all and had a clear demarcation between PALS or complaints work and patient experience work (see Table 3). The poster for this particular trust attracted significant attention at all JIFs. Some participants also noted the involvement of the quality improvement team in patient experience work at this trust, as well as the high degree of local control at ward or unit level over how to change practices in the light of patient experience feedback. Given the apparent lack of central control and of a formal patient experience team, as portrayed on the poster, participants at JIFs across the other trusts were intrigued as to the mechanisms in place to ensure learning, accountability and oversight centrally and across the trust. At all the JIFs, trust D’s ‘culture’, practices and infrastructure were the subject of interest and discussion. At trust D, participants were surprised to learn of the prominent role played by PALS and complaints teams at other trusts, asking whether or not this ‘reactive’ way of working was entirely desirable. They noted the absence of ‘quality improvement’ involvement in the work of other trusts’ patient experience teams and asked how other trusts knew whether or not improvements had been successful (a key focus for them was on testing and measurement). The questions and discussions around trust D’s unusual set-up brought to light how differently organisations conceive of quality improvement and of its relationship with the analysis of data over time, with tests of change and with outcome measures.
In addition to demonstrating the variation in practices across the five study sites, the posters prompted trust staff who had hitherto been unfamiliar with the size and composition of patient experience teams to ask their colleagues how patient experience data work operated in their own organisations. This was interesting because it reinforced our finding from fieldwork that much of this work is hidden from view.
Provocations activity
As discussed earlier, our research team carefully formulated four ‘provocations’ designed to stimulate discussion among JIF participants. These were used at all six JIFs. At trust D, a key informant also contributed an additional provocation that she wanted discussed at the local JIF. This additional provocation was then modified by the research team and used at another local JIF (trust E), where it was deemed relevant. Below we report a summary of the discussions of each provocation across the six JIFs.
Provocation 1: ‘national patient experience surveys are used to benchmark rather than improve the quality of care’.
This provocation produced much debate at a number of JIFs. Participants from trust A at the cross-site JIF were clear that they did not use national surveys, such as the NCPES, to benchmark quality of care. One participant from trust A said, ‘It’s very sad if that is what we’re doing but I don’t think it is’ (field notes, 5 March 2018, trust A). In stark contrast, participants at the JIF for trust D were unanimous in their opinion that the national surveys were a benchmarking exercise and thus of very limited use for an organisation such as theirs, which privileged local knowledge and action. One participant at trust E pointed out that the way in which Picker reports the results of the Adult Inpatient Survey means that benchmarking is explicitly part of the structure of the report. The same participant noted that this benchmarking structure is less helpful for trusts that tend to perform well because it can conceal issues rather than reveal them (e.g. if a trust receives a 60% favourable response to a question, and this is above the national average, the rather significant 40% of dissatisfied patients end up being ‘ignored’ because the rating is above average and therefore falsely reassuring). Other participants across the JIFs noted that the use of the national survey data depended on who in the organisation was looking at them. Certainly, some staff used surveys to benchmark, but this did not exclude the possibility of learning from and acting on the surveys. Indeed, taking issue with the premise of the provocation, one lead cancer nurse stated that benchmarking helps her identify areas of particular concern and thus contributes to improving care for patients. Others echoed this flexibility in the use of surveys while also emphasising that front-line staff were not always made aware that changes were being proposed as a result of the survey data: ‘The important thing is how is it translated across teams or services. Do front-line staff always know that changes are happening because the Inpatient Survey said X this year?’ (field notes, 8 March 2018, trust B). This was particularly striking at one trust-based JIF, where it became apparent through the discussion of the provocation that some front-line staff (e.g. staff nurses, health-care assistants) did not know that the national surveys existed. Participants at all JIFs repeated concerns about the timeliness of the national surveys as a major hindrance to their effective use in improving care.
Provocation 2: the NHS would lose little were the FFT abolished tomorrow.
At all but one JIF, we heard discussions that largely agreed with the statement. Most discussions centred on the premise that trusts had other ways of finding out the same information and so the FFT was unnecessary. Moreover, there was considerable hostility to the wording of the FFT question. At one local JIF, a ward sister asked a senior nurse whether or not she was allowed to move the FFT card box to another area of the ward where she thought it might be used more. The senior nurse seemed surprised at this question and reassured the ward sister that she had full autonomy to place the FFT card box where she thought it would be most useful. This exchange exemplified how some front-line members of staff at this and other trusts may perceive the FFT to be very rigid. The JIF participants from trust A clearly valued the use of the FFT in their organisation, emphasising how it yields a number of responses that would not be possible through other means and how the almost real-time nature of the comments allowed the organisation to address issues quickly. At trusts D and E, which overall agreed with the provocation, there were voices that saw some value in the FFT. At trust D, where the FFT is largely administered through text messages, two matrons made the point that it required very little work from them (‘it just happens; my staff aren’t spending time pushing cards on people’; field notes, 28 February 2018, trust D) and that the comments could prove useful. The complaints manager at trust E observed that the FFT was a way of improving staff morale: ‘it balances out the complaints. We can say to staff “Ok, we’ve had 25 complaints this month but we’ve had 4000 FFT recommends”’ (field notes, 14 March 2018, trust E).
Provocation 3: it is easier to improve the experience of patients with cancer than that of patients with dementia.
Discussion at all the JIFs coalesced around the difference in infrastructure between cancer care and care for people living with dementia. Cancer nurses talked about the impact that national strategies, alliances, networks, targets, greater resources, specialist nurses and the support of third-party organisations (e.g. Macmillan) have on the ability to improve cancer patient experience. Participants had largely been unaware of this difference and were particularly concerned at the relative lack of attention that dementia had been given compared with cancer. At one local JIF, nursing staff involved in the care of people with dementia pointed out that they had not previously thought about how cancer care might serve as a model for organising some of the patient experience work in the context of dementia care. These members of staff and the lead cancer nurse at this trust suggested that they might meet in the future to discuss such opportunities for dialogue further. At all JIFs, the discussion around this provocation prompted reflection and suggestions for sharing learning and practice between the two care areas, with the aim of giving more attention to dementia.
Provocation 4a: patient experience data do not need to be everybody’s business.
In presenting this provocation, which was used at the cross-site JIF in London and at trusts A and B, we made it very clear that our statement referred to data and not to patient experience. The statement prompted discussion about the nature of data (‘What do we mean by data?’) and about which types of staff were best equipped to be involved in collecting, analysing, translating, communicating and acting on them. At trust B, for instance, one of the participants suggested that senior ward staff ought to translate the data for staff nurses, who did not need to see the data themselves.
Provocation 4b: without a dedicated patient experience team, trusts cannot effectively improve the collection of high-quality patient experience data or learn from such data.
This provocation was presented at the local JIF for trust D at the request of senior nurses, who were interested in colleagues’ views on the trust’s lack of a formally constituted patient experience team. Participants did not seem keen on the idea of such a team. A senior nurse noted that such teams were likely to be seen as ‘third parties’ and outsiders to the work of care teams. This senior nurse also suggested that such teams lacked accountability for their actions. Another participant underlined the distributed nature of patient experience responsibility at the trust and said ‘we are all the patient experience team’ (field notes, 28 February 2018, trust D).
Provocation 4c: patient experience data should be collected and analysed only by front-line staff who can act on them directly.
This provocation was presented at two of the local JIFs (trusts C and E) because the researchers felt that it would work better there than the original. The consensus at trust E was that front-line staff were too busy to take on these tasks and lacked the specialist skills needed to collect and analyse patient experience data. However, there was a recognition that perhaps more could be done to involve front-line staff in these activities without adding to their existing workload. Similar sentiments were expressed at trust C.
Research team presentation
At all JIFs, we (three researchers co-presenting at the cross-site JIF in London and individual researchers presenting at the local JIFs) presented some preliminary findings from our data analysis. We deliberately chose to discuss our emerging findings after participants had already begun to engage with the topic and, to some extent, to own the event. The presentation was organised around our early reflections on the qualities of authority, autonomy and data contextualisation, which we had noticed were characteristic of interactions with patient experience data leading to improvements in care (see Chapter 6). Our presentation revolved around the idea that, although it is important to ensure that patient experience data are the best they can be in terms of the relevance and validity of the information they generate, improving the data without paying attention to the nature and qualities of the interactions in which they are involved can produce only limited benefits. We discussed the NCPES and the different interactions in which survey data participated, which turned their results into action for change. We then looked at the few, but significant, examples of data and data interaction in the context of dementia care and, finally, we discussed how some trusts seemed to be bridging the disconnect that many patient experience team members said they perceived between their data work and the improvement work that originated from it, but from which they felt removed.
At the cross-site JIF, following the presentation of preliminary findings, participants (now sitting around tables by trust, with a separate table for policy-makers) were asked to discuss the contents of the presentation. Only after this smaller group discussion were participants invited to comment to the larger group. Policy participants commented on how very different ‘cultures’ seemed to exist at different trusts in relation to patient experience data work. Although time constraints limited this discussion at the cross-site JIF, most comments revolved around the issue of organisational cultures and, more specifically, around how certain types of data, such as the FFT, can be a means of bringing patient experience work onto everybody’s radar in the organisation, and around the importance of the organisational culture in ensuring that staff are supported in the work they do. At the local JIFs, the presentation sparked useful discussion. At trust A, the divisional director of nursing, who had not been able to attend the cross-site JIF in London, was very interested in hearing more about the specific characteristics that made the ward accreditation system at one of the other trusts work so effectively, and she said she would contact colleagues at the trust in question to establish a dialogue for experience-sharing and organisational learning. This divisional director of nursing was also interested in whether or not involving health professionals other than nursing staff in patient experience data work might relieve nurses of some of the burden (the lead cancer nurse, on the other hand, suggested that patient experience data were and should remain the remit of nursing staff, who are better equipped to conceive of care holistically). At trust D, in response to the presentation, participants resolved to consider ways to improve patient, family and carer experience learning sessions so that learning about areas of care that cut across wards or units is shared with the appropriate staff. The example discussed was that of dementia care: when patient feedback relating to the care of people living with dementia is discussed at a learning session, this could be communicated to the lead dementia nurse or, alternatively, she could be invited to attend relevant learning sessions.
Implications for trusts
At the local JIFs, trust staff had the opportunity to discuss the possible implications for their practices relating to patient experience data and improvement in care for patients. This took place during all stages of the JIF, but the research team set aside time at the end of each event to discuss implications in a more focused way. Although each trust had discussions specific to its own organisation (Table 7), there were, nevertheless, common issues that most trusts wanted to explore further and address through practical measures. All trusts wanted to learn more about the ward accreditation scheme that was so successful at trust D and about this trust’s general ‘culture’ of local empowerment. Members of staff at the trusts where local empowerment was not common practice discussed whether or not it would be possible given their own organisational cultures and constraints. One participant at trust E launched a discussion with other members of staff to explore the extent to which a ‘hybrid system’ could be fashioned. To varying degrees, participants at the JIFs also considered how QI could become better embedded in patient experience work and discussed the particular challenges of using QI methodology for patient experience improvement. Another key implication was the promotion of better communication and more joined-up working across the trust. In the words of one director of nursing, ‘we need to challenge silo working more generally’ (field notes, 14 March 2018, trust E). Thus, participants discussed how cancer CNSs, who are often skilled at translating patient experience data into improvement, could work more closely with ward staff to share learning and to suggest improvements. This could be achieved, for instance, by ensuring that CNSs are invited to the meetings that matrons hold with their ward sisters/managers. At trust D, where patient experience work is largely handled by wards and clinics, there was recognition that more could be done to support broader learning and communication between, for example, the dementia lead nurse and the services that have acted on patient feedback to improve care for patients living with dementia. Participants were also keen to encourage communication between cancer services (where it was recognised that patient experience work was fairly sophisticated) and other service areas, such as dementia care. With regard to the work of patient experience teams, participants at the JIFs also raised questions about whether or not their teams had the right set of skills to support front-line staff in using patient experience data most effectively.
TABLE 7 Key discussion points at the local JIFs

| JIF | Key discussion points | | |
|---|---|---|---|
| Trust A | Interested in developing the idea of how the cancer model translates to other clinical areas, for example dementia | Wants to empower all staff to take leadership action and enable them to respond positively to what patients are telling them | Would like to keep focusing on the best ways to connect data to real changes in improvement; staff should take ‘ownership’ of data and of the response to these |
| Trust B | Encourage better communication: senior trust staff should explain to junior or front-line staff that changes are happening in response to patient feedback | In the implementation of their new ward assessment and accreditation framework, senior nursing staff will look at ways in which it can be used to improve the autonomy and authority of ward staff | Will consider the ways in which their PALS team can become more of a ‘patient experience team’; this already chimes with recent proposals by the head of PALS to recruit a ‘patient experience officer’ |
| Trust C | Prompted by a discussion by cancer nurses of the initiatives they are taking, there were calls for more communication across areas and wards to share practice | Shared learning could take place through visits and meetings and also through more formal ‘learning sessions’ involving the presentation of ‘storyboards’ | Improving and recognising the involvement of non-clinical staff (e.g. clerical staff) in patient experience improvement work |
| Trust D | Will consider ways to improve patient, family and carer experience learning sessions so that learning about areas of care that cut across wards or units is shared with appropriate staff; for example, when patient feedback relating to care of people living with dementia is discussed, this could be communicated to the lead dementia nurse or, alternatively, she could be invited to attend relevant learning sessions | Improving staff experience by organising a nurses’ day during which staff who have worked to improve patient experience are recognised by the trust and their colleagues | |
| Trust E | Cancer CNSs and matrons could work more closely together: ensure that CNSs are invited to relevant meetings chaired by matrons. The matron of the cancer centre does this, but not all CNSs are based at the centre; for example, a lung CNS would like to regularly attend the cardiology matron’s meeting to work with ward sisters to improve patient experience | Embedding QI (as opposed to ‘service improvement’, which is financial): how to involve quality improvement in patient experience work, for example by training staff in QI methods related to patient experience. This will help patient experience team members support ward staff better | Improving resources and trust focus on understanding and improving experiences of care for patients living with dementia |
The discussions held as part of the JIFs, and particularly those from the early JIFs, had two essential effects on our overall analysis. First, they allowed us to ‘sense-check’ and test our preliminary findings with our research participants. This was particularly the case for the ways in which we conceptualised and discussed the three qualities of interactions involving data. Second, they forced us to think in terms that remained practically relevant to the work that trusts were doing around patient experience data. Having described, in some detail, the JIF processes and having drawn out some of the themes emerging from the various activities they encompassed, in Chapter 8 we reflect on the findings reported thus far and the key messages for research, policy and practice that they point to.
Chapter 8 Discussion
At the heart of this study is the link between patient experience data work and the real improvements in care to which it can lead. This is a question in which, as our JIFs confirmed, staff at participating trusts were deeply interested. In the previous chapters we presented different ways in which patient experience data are produced, processed and used/acted on to improve care. We also discussed what aspects of these processes our study trusts wanted to know more about and why. Here, we tease out the central themes illuminated by our analysis.
The multiple nature of patient experience data
In Chapter 5, we illustrated the multiple nature of any one type of patient experience data. We showed how each type of data (e.g. the FFT, the NCPES, patient stories) went through a series of transformations, appearing as a different ‘version’ of itself at each transformation. In Chapter 6, we provided examples of how different ‘versions’ of a type of data, the FFT in our case, could link to action for improvement in different ways. The multiple nature of every form of patient experience data is not only theoretically in line with our ANT-informed approach, but is also significant for NHS organisations of all kinds. It suggests that these organisations may benefit from taking into account the various ‘versions’ of data and the stages at which data can and do lead to improvement, and from discerning which versions are most effective in different places. The multiple nature of data also prompts us to move away from conceptualisations of data processes as more or less convoluted ‘journeys’ with predefined trajectories, and towards a reading of data work as non-linear, often configured around clusters of activities corresponding to various versions of data. Although it was possible to identify certain steps and even some relatively linear sections of possible data ‘journeys’, we found that data came together in several places, moved both linearly and erratically, dispersed, reformed, multiplied and connected with other actors (humans, organisational systems and mechanisms, other data) at different moments in time, in planned and unplanned ways, and (inevitably) inside as well as outside the field of our observation. Looking at associations and interactions between different types and ‘versions’ of data and other (human and non-human) actors highlighted how linkages between data and action for improvement are not a final step in a ‘journey’, but occur at different times and involve a number of actors. The data journeys we observed looked more like interrelated clusters of interactions than merely tortuous paths. This change in our conceptualisation of patient experience data work is important in that it alters where we look for the impact and effects of data practices, as well as how we think about amplifying or consolidating such useful effects.
In ANT and post-ANT research (that is, the heterogeneous collection of case studies that have drawn on, translated and re-enacted ANT tools), a singular reality on which there can be different perspectives is not assumed. 33 Rather, different realities are enacted through practices. The philosopher Annemarie Mol37 provides a powerful example of how multiple realities are accomplished in her study of atherosclerosis, in which different objects are enacted through different practices so that atherosclerosis is a different entity in the clinic, on the angiogram, under the microscope and in the patient’s experience of it. 37 In the case of patient experience data, it is not just that different aspects of the FFT, or of the NCPES, or of a patient story are foregrounded or interacted with in different arrangements of people and things. Rather, different realities are enacted through different practices and, in turn, these different realities act in different ways that can all, in principle, be significant for improving the quality of care.
These reflections on the multiplicity of data also resonate with the point made by Martin et al. 27 about the potential of ‘soft intelligence’ (i.e. the ‘processes and behaviours associated with seeking and identifying soft data on the part of this individual or organizational actor, and with the knowledge producing activities of collation, synthesis, interpretation and application of insights’). Martin et al. 27 argue that the conventional sense-making frames of ‘aggregation’ (whereby similar reports from different sources indicate an issue worth investigating), ‘triangulation’ (whereby ‘soft data’ and ‘harder metrics’ are used to validate each other) and ‘instrumentalisation’ (of the ‘soft data’ to add ‘emotional force’ to an argument based on quantitative data) are important for understanding and making use of ‘soft data’, but that exclusive reliance on these frames is dramatically limiting, if not counterproductive. They also argue that more open-ended, dialogic ways of engaging with and making use of ‘soft data’ are needed. The forms of ‘triangulation’ our participants told us about resonated with the strategies of ‘aggregation’ and ‘triangulation’ for the use of ‘soft intelligence’ described by Martin et al. 27 In order to avoid reproducing unhelpful hard/soft dichotomies, we prefer to speak of ‘rich data’ or even ‘textured data’ when referring to all of the sources of information that Martin et al. 27 contrapose to ‘hard metrics’, and of ‘rich/textured intelligence’ rather than ‘soft intelligence’ when describing the processes and behaviours by which these data are collated, interpreted and put to use. Terminology preferences aside, however, our analysis supports Martin et al.’s27 argument that, without underestimating the crucial function of aggregation and triangulation of ‘soft data’, there is much more that these data can do. 27 In particular, our analysis in this report documents some of the ways in which interactions that escape ‘systematisation’ forge the connection between data and quality improvements. Our ANT-oriented sensibilities and tools drew our attention to the qualities characterising these interactions, and we now turn to discussing them in more detail.
Three ‘qualities’ that make patient experience data work for care improvement
As we discussed in Chapter 6, in observing the ways in which patient experience data led to improvements in the quality of care, we identified three fundamental qualities characterising the interactions in which data participated. We called these three qualities autonomy (to act/to trigger action), authority (to act/trigger action and for action to be seen as legitimate) and contextualisation (to act meaningfully in a given situation).
We saw that patient experience data can gain these qualities from interactions with other entities. In some cases, these entities were human actors, as in the case of CNSs responding to the results of the NCPES and also to its shortcomings (in the case of dementia CNSs), or in the case of a ward clerk creating new systems for the generation of ‘rich data’. In other cases, these entities were organisational mechanisms and processes, as in the example of the ward accreditation system and associated ‘learning sessions’; finally, in some cases, they were external entities, such as the CCG requesting information on action stemming from the FFT data at one of the trusts. Paying attention to these qualities is useful here because, if we follow Braithwaite’s62 advice and become better acquainted with complexity science perspectives on health-care organisation and improvement, we will find that:
Change, when it does occur, is always emergent. This is when features of the system, and behaviours, appear unexpectedly, arising from the interactions of smaller or simpler entities; thus, unique team behaviours emerge from individuals and their interactions.
Braithwaite62
Within this perspective, understanding the qualities that characterise some of these interactions is beneficial both because it orients us towards thinking in terms of systems and their properties (rather than, or as well as, in terms of causal logics, successful/unsuccessful improvement methods and individual leadership skills) and because it sensitises us to the direction in which those promoting change (managers, improvement teams, QI scholars) might want to ‘nudge or perturb the system’. 62 In addition, our analysis showed that the people and the mechanisms able to put data in context and to bring about the autonomy and authority for data to trigger action for improvement also allowed for the generation of additional data when existing data were inadequate or flawed. An implication of this is that, although it is certainly desirable and important to improve the quality of the patient experience data we gather (including their design and fitness for purpose, the manageability of their volume and the sustainability of the resources they require), focusing mainly or solely on this without also ensuring that professional roles and organisational mechanisms exist to interpret, contextualise and act directly on data (including deciding which additional data are needed and how best to gather them) will always prove limited in impact.
Focusing on the qualities emerging from interactions, and characterising them, was made possible by a research approach that deliberately suspends assumptions about ontological realism and organisational structures in order to focus on associations, micropractices and emerging orderings among human and non-human entities. For all the limitations of and criticisms levelled at the ANT (and post-ANT),30,33 the sensibilities and practical orientations it affords can prove useful, as our study shows, in shedding light on dimensions of organisational life that would be obscured by other approaches (e.g. approaches focusing primarily on organisational cultures, hierarchical structures and power relations). These dimensions, we argue, contribute to a nuanced, system-oriented understanding of patient experience data and improvement work. Our analysis shows, among other things, how sociomaterial approaches to researching organisational practices in health care can help address recent calls for more complexity-sensitive ways of going about improving care. Like complexity science for Braithwaite,63 the ANT also encourages us to:
. . . consider the dynamic properties of systems and the varying characteristics that are deeply enmeshed in social practices, whilst indicating that multiple forces, variables, and influences must be factored into any change process, and that unpredictability and uncertainty are normal properties of multi-part, intricate systems.
Braithwaite63
The examples we provided in the previous chapters illustrate how these qualities characterise interactions and practices that lead to improvements in care. However, they also bring to light the complexities of understanding what counts as quality improvement in NHS organisations, and the importance for researchers and improvement practitioners of reflecting on these complexities in more creative and comprehensive ways.
The many faces of quality improvement in acute NHS hospital trusts
In Chapters 5 and 6, we looked at the multiple nature of all patient experience data, their transformations and the variety of ways in which they can trigger action for care improvement. In Chapter 7, we discussed what the trusts participating in our study found significant in their own and their colleagues’ patient experience work, and the ways in which they realised improvements in patients’ experiences of care. In our observations, what we refer to as formal QI (i.e. the projects and priorities the organisation recognises under this label) and everyday QI (i.e. the multitude of actions and interactions that bring about change and improvement but are not formally reported or acknowledged as QI) appeared as two fairly distinct and only occasionally intersecting enterprises (e.g. when QI team members attended ‘learning sessions’, as in Example 2: authority, autonomy and data contextualisation in and through ‘learning sessions’; or when patient experience data were seen as ‘grey’ by QI staff, as in Example 4: authority, autonomy and contextualisation and patient experience teams). It is worthwhile briefly exploring these different ways of doing improvement if we want to take seriously the various forms of impact that patient experience data have in practice.
All NHS trusts are familiar with the structured organisational processes and procedures that are identified as QI (what we are calling formal QI here). At each trust, there are projects that are formally labelled as QI projects, which then appear in the QI sections of committee and board agendas. Similarly, there are organisational mechanisms enabling responses to patient safety data and clinical effectiveness indicators that are considered part of the trust’s QI work, which is then presented as such to a variety of audiences (internally, e.g. to staff via posters and notice boards, and externally, e.g. to regulatory bodies and professional associations). Although ‘quality’ in health care is commonly understood as comprising the three fundamental overlapping elements of patient safety, clinical outcomes and patient experience, QI work at NHS trusts often focuses largely on the first two elements. QI work in these two areas has, for quite some time now, been tied to the analysis and improvement of quantifiable indicators (e.g. number of falls, waiting times, instances of sepsis, mortality indicators). These quantifiable indicators tend to lead to the identification of clearly defined improvement objectives, which can be achieved through the systematic implementation of specific measures and the careful evaluation of their effects. For example, the introduction of a new standardised procedure for the prevention or early identification of sepsis, an organisational measure that would be identified as a QI project, might lead to a reduction in the cases of sepsis observed on a ward over a defined period of time; a consistent change in the measured indicator would then provide evidence of the effectiveness of the new standardised procedure. The improvement of patients’ experiences of care, at least as it is understood and practised at present, does not, and possibly should not, rely on similar forms of quantification (some attempts at quantification exist, but they prove slippery and not particularly effective in practice), as the members of staff we spoke to reiterated. Although the improvement in practice of patient safety and clinical outcomes also rests on a number of other factors and practices that cannot be captured solely by clear indicators, this has less of an effect on an organisation’s ability to account for improvements that can be unequivocally documented via validated metrics.
Furthermore, although there has been very little previous research exploring responsibility and accountability for the collection and use of patient experience data, more attention has seemingly been paid to similar issues relating to patient safety and clinical outcomes data. Despite this, a narrative review of 122 publications, mainly comprising cross-sectional studies in the USA, still concluded that ‘efforts to create effective governance for quality and patient safety remain variable and are only just beginning’, and that future work should focus on developing conceptual models that can provide ‘appropriate bases for action’. 64 Later qualitative work by the same authors drew attention to ‘the role of trust and intelligence in highlighting the potential dangers and limitations of approaches to hospital board oversight which have been narrowly focused on a risk-based view of organisational performance’,65 as well as to the ‘performative dimensions’ of processing and interpreting patient safety data at board level. 66 More recent work has focused on the development and evaluation of board-level interventions to support senior hospital leaders in developing organisation-wide QI strategies, without fully exploring how these strategies affect practices within hospitals. 67 In summary, there are very few existing studies of the day-to-day governance of data (whether relating to patient experience, patient safety or clinical outcomes) in hospitals.
In the previous chapters, we have illustrated how different ‘versions’ of any type of patient experience data can translate into different forms of action aimed at improving care (e.g. ‘the NCPES in board papers’ contrasted with ‘the NCPES at a cancer delivery group meeting’ or ‘the NCPES at a patient event’) and how data mobilise, and are mobilised by, other actors, for example CNSs, ward clerks, ward accreditation systems and CCGs, in ways that are linked to action for improvement. We have also described how a number of these activities are not necessarily formally reported or validated at an organisational level. These activities constitute a type of everyday QI that is perhaps less structured than formal QI projects and processes, but no less relevant in terms of the overall improvements in care and, we argue, organisational learning. The problematic aspect of the co-existence of QI (or formal QI) work and all of the everyday QI activity is that the former is not value-neutral. A definition of QI that is commonly used in the health-care improvement literature is that developed by Øvretveit:68 ‘better patient experience and outcomes achieved through changing provider behaviour and organisation through using a systematic change method and strategies’. 68 Here ‘systematic’ methods and strategies suggest a scientific and rigorous approach, and elsewhere we are reminded of the important risks of QI work that is ‘undertaken in the form of time-limited small-scale projects, perhaps conducted as part of professional accreditation requirements’. 69
Although formal QI was not the focus of our research, QI as an organisational endeavour, grounded in evidence of effectiveness and in strategic planning and resourcing, emerged from our analysis as possessing qualities that lead it to encounter support as well as resistance in the interactions of which it is part. In our case, when formal QI processes interacted with patient experience data (as in the example of the patient, family and carer experience learning sessions we detailed in Chapter 6), they conferred authority on activities triggered by patient experience data. We suggest here that, rather than trying to apply QI methods and approaches to patient experience data – an approach against whose limitations and hazards Martin et al. 27,28 warn us – NHS organisations might want to pay attention to the relationships between what we have called formal QI and everyday QI. We noted earlier the recent calls for more complexity-oriented work in health-care quality improvement and improvement science. 63 We suggest that QI scholars and practitioners should further investigate the intricate relationships between the multitude of practices developing around patient experience data of all kinds and the organisationally legitimised projects and programmes under the official quality label, and between the entities (committees, reporting lines, documents) and professional roles associated with quality assurance and improvement and those associated with patient experience work.
Patient experience data work and nursing work
The secondary aim of our study was to understand and optimise the involvement and responsibilities of nurses in senior managerial and front-line roles with respect to such data. The vast majority of patient experience data work at our study sites was conducted in two distinct domains: that of nursing staff and that of the hospitals’ clerical staff, often aided by volunteers. In Chapter 5, we described the variation in composition and overall organisation of patient experience teams (where they existed as formally designated teams) across the five trusts in our study. We saw how, at all case study sites, responsibility for patient experience ultimately rested with the executive director of nursing and was often managed largely by the deputy or divisional directors of nursing. The front-line staff in charge of co-ordinating the generation of patient experience data at ward level (e.g. ward sister) or at service level (e.g. lead cancer nurse) were also nurses, whereas the work involved in processing the data and reporting the results of their analysis fell within the remit of patient experience team members and/or patient relations/PALS teams. We saw in our examples from the fieldwork how action for improvement stemming from data work often involved nursing staff, sometimes as a fundamental aspect of their formal role (as in the case of CNSs) and at other times as one element of a wide range of responsibilities (as in the case of lead cancer nurses, matrons and senior trust nurses). Our analysis showed that nurses are clearly pivotal to all quality improvement work, and particularly to the everyday QI we discussed in some detail above.
Allen70 points out that there is ‘a growing recognition that nurses influence service quality as much through their contribution to health-care systems as through their clinical contact with patients’ and that both deserve careful consideration. 70 We suggest here that another component of the ‘invisible work of nurses’70 encompasses the significant effort and responsibilities implicating nursing staff in the generation, interpretation and translation into action of patient experience data. These responsibilities are founded on a professional gaze and on mechanisms of action similar to those deployed by nurses in the management of patient care. In addition, our data suggest that both formal QI and everyday QI might benefit from a more in-depth understanding of the actual and potential roles of other professional figures in the context of improvement driven by patient experience data. Although, as expressed at some of our JIFs, some members of staff would argue that patient experience data do not need to be ‘everybody’s business’ (in the sense that people in a position to bring about the qualities we highlighted earlier are best placed to handle data and do something about them), overall, a significant majority of participants expressed the view that the more people in professional roles are aware of and involved in patient experience work, the better. Usually, this referred to the involvement of physicians, who were visibly engaged and interested in patient experience data work at only one of the trusts we visited (trust E). However, we suggest that the relationship(s) between the work carried out by dedicated patient experience teams and other clerical staff with patient experience responsibilities (where no formally designated team is in place) and trust-wide QI efforts may require more immediate attention. We suggest that, where there is a disconnect between the work that goes into collecting, collating and processing data and the work aimed at improving quality of care in response to these data, there are missed opportunities for a more effective distribution of the qualities that support everyday improvement work.
Engaging in conversation with participating trusts: the value of Joint Interpretive Forums
We want to say a final word here on our reflections on using JIFs in our study. We found that this form of engagement with trusts provided very useful research data (which we expected) but also generated new ways of interacting with participants and of bridging the often uncomfortable gap between health-care research and practice (which we did not expect). We planned the JIFs in our study with a view to sharing preliminary findings to help focus our thinking around the actionable principles we could distil from them. Our essential concerns at the time of designing the study were to have participants provide a form of ‘respondent validation’71 (or member checking) of our preliminary findings and to generate insights into how these could link to implications of relevance to the participating trusts and to acute trusts more broadly.
After running the cross-site JIF event in London in January 2018, we were surprised to see how involved some of the participating trusts became in organising and running a subsequent JIF workshop locally. In Chapter 7, we illustrated how both the cross-site and the local JIFs were organised, how participation and perspective-sharing were successfully fostered in the cross-site JIF, how ownership of the workshops played out at the trusts that were particularly involved with this process, and how we and the staff and patient representatives from the trusts gained additional insights from these exercises. We suggest that these novel interactions, in which participants are invited to take part in highly interactive workshops that engage with both general policy issues and local contexts, would be more accurately seen as an integral part of any applied research study, rather than as only a useful addition.
Limitations
Every research study presents limitations and constraints; we discuss here those we identified in our own work.
Sample of participating trusts
At the time of designing the study, we planned to use the outcomes of the latest CQC Adult Inpatient Survey2 to identify trusts that scored differently on section 10 – overall views of care and services – and to recruit two trusts performing ‘better than others’ on one or more dimensions of this section, one trust performing ‘about the same as others’ on all dimensions and one trust performing ‘worse than others’ on one or more dimensions. For the reasons explained in Chapter 3, we recruited one more site than initially planned but none from the ‘worse than others’ group. Although the sampling framework we proposed was mainly a means of sampling across a range of practices that we understood to be in flux and evolving, the fact remains that we did not carry out any ethnographic observations in trusts that did less well on section 10 of the survey. This is certainly a limitation, but only insofar as these observations would have added a useful layer of understanding to our study. Because we did not observe practices in trusts that are working towards significant improvements in the care they provide, our analysis is mostly of an appreciative kind. We have documented and shed light on the work that patient experience data (and the interactions in which they participate) do to generate action for improvement. This is a valuable outcome, although more work is needed if we are to understand the obstacles and challenges that may hinder these processes in organisations seen to perform less well on the basis of the annual inpatient survey results.
Patient voice
Although we interviewed patients, patient representatives and/or public governors at all of our trusts, and had several informal conversations with them before and/or after board and committee meetings, the voice of patients is not prominent in this report. There are two main reasons for this. First, the main aim of the study was to understand and optimise trusts’ data practices around patient experience and, although patients were at the heart of the generation of patient experience data, they were (with a few exceptions) less involved in other aspects of gathering, processing and using the data. Second, in accordance with our secondary study aim, our analysis focused on organisational processes and mechanisms and on the role of nursing staff within them. From this, we learnt that patient involvement in data work beyond data generation requires a structured approach in its own right, either as an independent workstream or as an entirely separate research project, and that paying attention to all forms of involvement in data work would require more time and resources. Nevertheless, speaking to patients and patient representatives in our fieldwork was an important part of our research: it helped us gain a sense of the many perspectives that existed around patient experience data and certainly informed our thinking about the configurations of relationships and associations in which the data became embedded at various times.
Other data and data ‘failures’
In the previous chapters we discussed the benefits that can come from integrating patient experience data with patient safety and clinical outcomes data (e.g. the discussion of the ward accreditation system in Example 3: integrated quality – the authority, autonomy and contextualisation conferred by a ward accreditation system). By ‘integrating’, we mean having systems in place that recognise the complementarity of these different dimensions of quality by attributing comparable, if not equal, weight and ‘voice’ to each in quality accounting processes. However, our study did not set out to examine patient safety data or clinical outcomes data, or the organisational management and use of such data. Although these data appear in our descriptions in more than one instance, we deliberately did not invest time in looking closely at their characteristics, interactions, transformations or enrolment in actor–networks. Our claims in this report are based on our appreciation of the status of these data in the meetings we observed, the documents we examined and the discussions we had and analysed. In addition, in Chapter 6, we reported an example of patient experience data failing to identify patients’ concerns or dissatisfaction. However, owing to our focus on the ways in which patient experience data could be seen to lead to action for improvement, this theme is mentioned only briefly.
External entities
On reflection, we would have liked to include in our study some observations of the external entities that we identified as prominent in interactions with patient experience data. These included organisations such as Healthwatch, local CCGs and highly involved charities such as Macmillan, as well as patient experience data contractors such as Picker and Quality Health. Again, this would have been useful additional fieldwork, but it would have required a much longer study and the investment of greater resources. In our analysis, we examined a number of ways in which these entities interacted with patient experience data and data-related work at participating trusts. However, we did not examine these entities’ processes and further interactions outside the trusts, and we suggest that this would be a worthwhile focus of further research.
Chapter 9 Conclusions and implications for policy, practice and research
In this study, we have observed in detail patient experience data practices and their links with quality improvement at five acute NHS hospital trusts. We discussed these practices with key informants at the trusts during our fieldwork and shared our emerging findings with them in the ‘sense-making’ phase of the project. We have illustrated the multiple lives of patient experience data and detailed the types of transformation they can undergo. We have foregrounded the importance of paying attention to the interactions in which data are recruited and embedded, because these interactions bring into being the qualities that make links with action for care improvement possible. More specifically, we have discussed how patient experience data are more likely to lead to care improvements when they participate in interactions characterised by the three qualities of authority, autonomy and contextualisation. These interactions can involve human actors (e.g. nurses in specific roles such as CNSs) and non-human actors (e.g. external organisations such as CCGs, organisational processes such as trust-wide ward accreditation systems, and QI tools and techniques such as those used in one trust’s ‘learning sessions’). We have also drawn attention to the fact that human actors who are in a position to bring about these qualities can sometimes generate further patient experience data and action for improvement in response to data flaws and limitations. We have shown how nursing staff are responsible and accountable for patient experience at the trust level, and organise and conduct much of the work that leads to action to improve the quality of care in response to feedback on wards and across service areas; we have highlighted the CNS as a key figure in this regard, both in cancer care, where this is a consolidated role with responsibility for patient experience, and in dementia care, where the role is less frequently in place but proves equally crucial. Finally, we have briefly discussed how ANT-informed research approaches give access to less obvious dimensions of organisational practices, and we have illustrated the value of sense-making work through highly participative workshops (JIFs, in our case) in research efforts that have a clear applied element.
Our findings have the following implications for policy and practice:
-
For patient experience data to lead to improvements in the quality of care, it is important to improve the data that NHS trusts collect and to optimise the quantity that is collected. However, our data suggest that this effort alone yields limited benefits if attention is not also paid to the qualities, in particular autonomy, authority and contextualisation, that need to characterise the interactions between the data and other (human and non-human) actors in order for the data to lead to care improvements.
-
Our analysis indicates that quality improvement research and practice may benefit from approaches that take due account of the emergent nature of much improvement work and, more specifically, of the complex relationships between institutionally recognised QI efforts (formal QI) and the vast amount of unsystematised improvement work that takes place in response to patient experience data in less well-documented ways (everyday QI).
-
Our study has identified a frequent disconnect between the data generation and management work carried out by patient experience teams (or by clerical staff with patient experience responsibilities where formally designated teams do not exist) and the action for care improvement resulting from those data, which is more often the responsibility of nursing and other clinical staff. Acute NHS hospital trusts may be able to optimise the use of patient experience data by exploring the configurations of, and communication between, the different professional figures and teams involved in patient experience work.
-
Organisational tools and mechanisms that include patient experience data in interactions characterised by authority, autonomy and the possibility of contextualisation may make external drivers, such as national targets or the mandatory nature of data generation, less critical than they would be in the absence of such mechanisms. Accordingly, organisations that successfully establish mechanisms that embed action in response to patient experience data work may find external drivers less important and potentially burdensome.
-
Finally, our analysis suggests that there are opportunities for organisational learning in the exchange of experiences within and between organisations: some of the models orienting service responses to data in the context of cancer care may prove, with due adjustments, viable and promising for patient experience data work aimed at improving care for people with dementia.
Our recommendations for research are:
-
Further research examining the ways in which patient safety, patient experience and clinical outcomes data intersect and interact in the everyday practices of hospital work (e.g. care on the wards, meetings, reports) and inform particular forms of improvement work would provide useful insights to inform developments in improvement science.
-
Organisations external to NHS trusts, such as CCGs, large charities such as Macmillan Cancer Support, and contractors such as Quality Health and Picker, play an important role in the organisation of the micropractices of patient experience data work. Further research should consider exploring in more detail the ways in which these organisations enable or constrain patient experience data work and QI, especially the everyday QI we have described here.
-
The highly participative and practically relevant ‘sense-making’ afforded by multistakeholder workshops supports an engaging framework for applied health-care research. Such workshops strengthen research collaborations between academia and health-care providers and contribute to participants’ ownership of at least part of the research process. Further research into the longer-term impact on individuals and organisations of contributing to and participating in such workshops is desirable.
Acknowledgements
Without the engagement, enthusiasm and infinite patience of our collaborators at the five study trusts, the research presented in this report would have been impossible to complete. We thank the NHS staff and volunteers who approved their trust’s participation in the study, gave us their time, agreed to be interviewed, allowed us to shadow them in their work and observe meetings, introduced us to other colleagues, and shared documents and ideas with us. We are also grateful to the NHS patients who agreed to talk to us about their experiences of providing feedback to hospital trusts.
We thank Dr Mary Adams, Research Fellow at King’s College London, for her contributions to the project in its early stages and for her initial fieldwork at one of the participating NHS trusts. We also thank Dr Alessia Costa, Research Associate at King’s College London, for observing and recording the proceedings of the cross-site JIF in London. We are grateful to Christine Chapman, independent consultant and PPI advisor, for her involvement in the design stage and early phases of the study, for contributing to shaping our successful funding application and for guiding our PPI strategy.
Finally, we are grateful to the following members of the Advisory Group for their advice and support throughout the project:
-
Catherine Dale, Guy’s and St Thomas’ NHS Foundation Trust
-
Mairead Griffin, Guy’s and St Thomas’ NHS Foundation Trust
-
Nicky Hayes, King’s College Hospital NHS Foundation Trust
-
Annie Laverty, Northumbria Healthcare NHS Foundation Trust
-
David McNally, NHS England
-
Martine Price, Aneurin Bevan University Health Board
-
Janice Sigsworth, Imperial College Healthcare NHS Trust
-
John Sprange, PPI advisor
-
Sylvia Tang, Priory Group
-
Anna Torode, PPI advisor.
Contributions of authors
Sara Donetto (Lecturer, King’s College London) was the principal investigator, led the overall study design, co-ordinated the study team, conducted data collection and analysis at one study trust, contributed to overall data analysis, and led and contributed substantially to report writing.
Amit Desai (Research Fellow, King’s College London) conducted data collection and analysis at three study trusts, contributed to overall data analysis, contributed substantially to report writing and gave final approval of the manuscript.
Giulia Zoccatelli (Research Associate, King’s College London) conducted data collection and analysis at two study trusts, contributed to overall data analysis and report writing, and gave final approval of the manuscript.
Glenn Robert (Professor of Healthcare Quality and Innovation, King’s College London) contributed to the overall study design, contributed to data analysis and report writing, and gave final approval of the manuscript.
Davina Allen (Professor of Nursing, Cardiff University) contributed to the overall study design, data analysis and report writing, and gave final approval of the manuscript.
Sally Brearley (Independent PPI Advisor) contributed to the overall study design, data analysis and report writing, and gave final approval of the manuscript.
Anne Marie Rafferty (Professor of Nursing Policy, King’s College London) contributed to the overall study design, data analysis and report writing, and gave final approval of the manuscript.
Publication
Desai A, Zoccatelli G, Adams M, Allen D, Brearley S, Rafferty AM, et al. Taking data seriously: the value of actor–network theory in rethinking patient experience data. J Health Serv Res Policy 2017;22:134–6.
Data-sharing statement
All qualitative data generated that can be shared are contained within the report. All data queries and requests should be submitted to the corresponding author for consideration.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health and Social Care. If there are verbatim quotations included in this publication, the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health and Social Care.
References
- NHS Improvement. Patient Experience Improvement Framework. 2018. https://improvement.nhs.uk/resources/patient-experience-improvement-framework/ (accessed 16 August 2019).
- Care Quality Commission. Adult Inpatient Survey 2018. 2019. www.cqc.org.uk/publications/surveys/adult-inpatient-survey-2018 (accessed 10 July 2019).
- Coulter A, Locock L, Ziebland S, Calabrese J. Collecting data on patient experience is not enough: they must be used to improve care. BMJ 2014;348. https://doi.org/10.1136/bmj.g2225.
- Dr Foster Intelligence. The Intelligent Board 2010: Patient Experience. 2010.
- Robert G, Cornwell J, Brearley S, Foot C, Goodrich J, Joule N, et al. What Matters to Patients? Developing the Evidence Base for Measuring and Improving Patient Experience. Department of Health and Social Care and Institute for Innovation and Improvement; 2012.
- Rozenblum R, Lisby M, Hockey PM, Levtzion-Korach O, Salzberg CA, Efrati N, et al. The patient satisfaction chasm: the gap between hospital management and frontline clinicians. BMJ Qual Saf 2013;22:242-50. https://doi.org/10.1136/bmjqs-2012-001045.
- Ziebland S, Coulter A, Calabrese JD, Locock L. Understanding and Using Health Experiences: Improving Patient Care. Oxford: Oxford University Press; 2013.
- Ipsos MORI Social Research Institute. Patient Feedback Survey 2012: National and Strategic Health Authority Summary Report. 2012.
- de Silva D. Evidence Scan No. 18: Measuring Patient Experience. London: The Health Foundation; 2013.
- National Institute for Health and Care Excellence (NICE). Patient Experience in Adult NHS Services: Improving the Experience of Care for People Using Adult NHS Services. 2012.
- O’Hara JK, Lawton RJ, Armitage G, Sheard L, Marsh C, Cocks K, et al. The Patient Reporting and Action for a Safe Environment (PRASE) intervention: a feasibility study. BMC Health Serv Res 2016;16. https://doi.org/10.1186/s12913-016-1919-z.
- Sheard L, Marsh C, O’Hara JK, Armitage G, Wright J, Lawton R. The patient feedback response framework – understanding why UK hospital staff find it difficult to make improvements based on patient feedback: a qualitative study. Soc Sci Med 2017;178:19-27. https://doi.org/10.1016/j.socscimed.2017.02.005.
- Grob R, Schlesinger M, Parker AM, Shaller D, Barre LR, Martino SC, et al. Breaking narrative ground: innovative methods for rigorously eliciting and assessing patient narratives. Health Serv Res 2016;51:1248-72. https://doi.org/10.1111/1475-6773.12503.
- Bion J, Taylor C, Tarrant C, Sullivan P, Mullhi R, et al. Patient Experience and Reflective Learning (PEARL). Southampton: NIHR; 2016.
- Lawton R, Marsh C, O’Hara J, Sheard L, Dexter M, et al. Understanding and Enhancing how Hospital Staff Learn From and Act on Patient Experience Data. Southampton: NIHR; 2015.
- Powell J, Atherton H, Mazanderani F, Williams V, de Iongh A, et al. Improving NHS Quality Using Internet Ratings and Experiences (INQUIRE). Southampton: NIHR; 2015.
- Weich S, Crepaz-Keay D, Newton E, Larkin M, El Enany N, et al. Evaluating the Use of Patient Experience Data to Improve the Quality of Inpatient Mental Health Care. Southampton: NIHR; 2015.
- Flott KM, Graham C, Darzi A, Mayer E. Can we use patient-reported feedback to drive change? The challenges of using patient-reported feedback and how they might be addressed. BMJ Qual Saf 2017;26:502-7. https://doi.org/10.1136/bmjqs-2016-005223.
- Burt J, Campbell J, Abel G, Aboulghate A, Ahmed F, Asprey A, et al. Improving patient experience in primary care: a multimethod programme of research on the measurement and improvement of patient experience. Programme Grants Appl Res 2017;5. https://doi.org/10.3310/pgfar05090.
- Graham C, Käsbauer S, Cooper R, King J, Sizmur S, Jenkinson C, et al. An evaluation of a near real-time survey for improving patients’ experiences of the relational aspects of care: a mixed-methods evaluation. Health Serv Deliv Res 2018;6. https://doi.org/10.3310/hsdr06150.
- Ziewitz M. Experience in action: moderating care in web-based patient feedback. Soc Sci Med 2017;175:99-108. https://doi.org/10.1016/j.socscimed.2016.12.028.
- Pflueger D. Accounting for quality: on the relationship between accounting and quality improvement in healthcare. BMC Health Serv Res 2015;15. https://doi.org/10.1186/s12913-015-0769-4.
- Gleeson H, Calderon A, Swami V, Deighton J, Wolpert M, Edbrooke-Childs J. Systematic review of approaches to using patient experience data for quality improvement in healthcare settings. BMJ Open 2016;6. https://doi.org/10.1136/bmjopen-2016-011907.
- Greenhalgh J, Dalkin S, Gooding K, Gibbons E, Wright J, Meads D, et al. Functionality and feedback: a realist synthesis of the collation, interpretation and utilisation of patient-reported outcome measures data to improve patient care. Health Serv Deliv Res 2017;5. https://doi.org/10.3310/hsdr05020.
- Desai A, Zoccatelli G, Adams M, Allen D, Brearley S, Rafferty AM, et al. Taking data seriously: the value of actor-network theory in rethinking patient experience data. J Health Serv Res Policy 2017;22:134-6. https://doi.org/10.1177/1355819616685349.
- Renedo A, Komporozos-Athanasiou A, Marston C. Experience as evidence: the dialogic construction of health professional knowledge through patient involvement. Sociol 2017;52:778-95. https://doi.org/10.1177/0038038516682457.
- Martin GP, McKee L, Dixon-Woods M. Beyond metrics? Utilizing ‘soft intelligence’ for healthcare quality and safety. Soc Sci Med 2015;142:19-26. https://doi.org/10.1016/j.socscimed.2015.07.027.
- Martin GP, Aveling EL, Campbell A, Tarrant C, Pronovost PJ, Mitchell I, et al. Making soft intelligence hard: a multi-site qualitative study of challenges relating to voice about safety concerns. BMJ Qual Saf 2018;27:710-17. https://doi.org/10.1136/bmjqs-2017-007579.
- Callon M, Law J, Rip A. Mapping the Dynamics of Science and Technology. London: Palgrave Macmillan; 1986.
- Law J, Hassard J. Actor Network Theory and After. Oxford: Wiley-Blackwell Publishers; 1999.
- Latour B. Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge, MA: Harvard University Press; 1987.
- Latour B. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press; 2005.
- Michael M. Actor Network Theory: Trials, Trails and Translations. London: Sage; 2017.
- Allen D. Understanding context for quality improvement: artefacts, affordances and socio-material infrastructure. Health 2013;17:460-77. https://doi.org/10.1177/1363459312464072.
- Allen D. Reconceptualising holism within the contemporary nursing mandate: from individual to organisational relationships. Soc Sci Med 2014;119:131-8. https://doi.org/10.1016/j.socscimed.2014.08.036.
- Berg M. Rationalizing Medical Work: Decision-Support Techniques and Medical Practices. Cambridge, MA: MIT Press; 1997.
- Mol A. The Body Multiple: Ontology in Medical Practice. Durham, NC: Duke University Press; 2002.
- Sandelowski M. Advanced Qualitative Research for Nursing. Oxford: Blackwell; 2003.
- Timmermans S, Berg M. The Gold Standard: The Challenge of Evidence-Based Medicine and Standardization in Health Care. Philadelphia, PA: Temple University Press; 2010.
- Broer T, Nieboer AP, Bal RA. Opening the black box of quality improvement collaboratives: an Actor-Network theory approach. BMC Health Serv Res 2010;10. https://doi.org/10.1186/1472-6963-10-265.
- Fenwick T. Sociomateriality in medical practice and learning: attuning to what matters. Med Educ 2014;48:44-52. https://doi.org/10.1111/medu.12295.
- Fenwick T. Knowledge circulations in inter-para/professional practice: a sociomaterial enquiry. J Vocational Educ Training 2014;66:264-80. https://doi.org/10.1080/13636820.2014.917695.
- Orlikowski WJ. Sociomaterial practices: exploring technology at work. Organ Stud 2007;28:1435-48. https://doi.org/10.1177/0170840607081138.
- Orlikowski WJ. The sociomateriality of organisational life: considering technology in management research. Cambridge J Econ 2009;34:125-41. https://doi.org/10.1093/cje/bep058.
- Orlikowski WJ, Scott SV. Sociomateriality: challenging the separation of technology, work and organization. Acad Manag Ann 2008;2:433-74. https://doi.org/10.1080/19416520802211644.
- Neuwelt PM, Kearns RA, Browne AJ. The place of receptionists in access to primary care: challenges in the space between community and consultation. Soc Sci Med 2015;133:287-95. https://doi.org/10.1016/j.socscimed.2014.10.010.
- Sikveland R, Stokoe E, Symonds J. Patient burden during appointment-making telephone calls to GP practices. Patient Educ Couns 2016;99:1310-18. https://doi.org/10.1016/j.pec.2016.03.025.
- Allen D. Lost in translation? ‘Evidence’ and the articulation of institutional logics in integrated care pathways: from positive to negative boundary object? Sociol Health Illn 2014;36:807-22. https://doi.org/10.1111/1467-9566.12111.
- Allen D. From boundary concept to boundary object: the practice and politics of care pathway development. Soc Sci Med 2009;69:354-61. https://doi.org/10.1016/j.socscimed.2009.05.002.
- Locock L, Coulter A, Churchill N, Rees S, Graham C, et al. Understanding How Frontline Staff Use Patient Experience Data for Service Improvement – An Exploratory Case Study Evaluation and National Survey (US-PEx). Southampton: NIHR; 2015.
- Mohrman SA, Gibson CB, Mohrman AM. Doing research that is useful to practice: a model and empirical exploration. Acad Manage J 2001;44:357-75. https://doi.org/10.5465/3069461.
- Latour B. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press; 2005.
- NHS England. The Friends and Family Test. 2014. www.england.nhs.uk/wp-content/uploads/2015/07/fft-guidance-160615.pdf (accessed 8 June 2018).
- Quality Health. National Cancer Patient Experience Survey. 2015. www.quality-health.co.uk/surveys/national-cancer-patient-experience-survey (accessed 20 August 2019).
- Royal College of Psychiatrists. Involving Service Users and Carers. 2019. www.rcpsych.ac.uk/improving-care/ccqi/national-clinical-audits/national-audit-of-dementia/information-for-people-with-dementia-and-carers (accessed 20 August 2019).
- NHS England. Guidance on the Submission of Acute Friends and Family Test Data. 2015. www.england.nhs.uk/wp-content/uploads/2015/02/fft-sub-guide-acute.pdf (accessed 8 June 2018).
- Quality Health Ltd. National Cancer Patient Experience Survey 2016: National Results Summary. 2017.
- NHS England, Public Health England. CancerData Dashboard. n.d. www.cancerdata.nhs.uk (accessed 16 August 2019).
- Weick KE, Sutcliffe KM, Obstfeld D. Organizing and the process of sensemaking. Organ Sci 2005;16:409-21. https://doi.org/10.1287/orsc.1050.0133.
- Kurtz CF, Snowden DJ. The new dynamics of strategy: sense-making in a complex and complicated world. IBM Systems J 2003;42:462-83. https://doi.org/10.1147/sj.423.0462.
- Snowden D. Cynefin, a Sense of Time and Place: An Ecological Approach to Sense Making and Learning in Formal and Informal Communities n.d.:1-11.
- Braithwaite J. Changing how we think about healthcare improvement. BMJ 2018;361. https://doi.org/10.1136/bmj.k2014.
- Braithwaite J, Churruca K, Long JC, Ellis LA, Herkes J. When complexity science meets implementation science: a theoretical and empirical analysis of systems change. BMC Med 2018;16. https://doi.org/10.1186/s12916-018-1057-z.
- Millar R, Mannion R, Freeman T, Davies HT. Hospital board oversight of quality and patient safety: a narrative review and synthesis of recent empirical research. Milbank Q 2013;91:738-70. https://doi.org/10.1111/1468-0009.12032.
- Millar R, Freeman T, Mannion R. Hospital board oversight of quality and safety: a stakeholder analysis exploring the role of trust and intelligence. BMC Health Serv Res 2015;15. https://doi.org/10.1186/s12913-015-0771-x.
- Freeman T, Millar R, Mannion R, Davies H. Enacting corporate governance of healthcare safety and quality: a dramaturgy of hospital boards in England. Sociol Health Illn 2016;38:233-51. https://doi.org/10.1111/1467-9566.12309.
- Jones L, Pomeroy L, Robert G, Burnett S, Anderson JE, Morris S, et al. Explaining organisational responses to a board-level quality improvement intervention: findings from an evaluation in six providers in the English National Health Service. BMJ Qual Saf 2018;28:198-204. https://doi.org/10.1136/bmjqs-2018-008291.
- Øvretveit J. Does Improving Quality Save Money? A Review of Evidence of Which Improvements to Quality Reduce Costs to Health Service Providers. London: The Health Foundation; 2009.
- Dixon-Woods M, Martin GP. Does quality improvement improve quality? Future Healthcare J 2016;3:191-4. https://doi.org/10.7861/futurehosp.3-3-191.
- Allen D. The Invisible Work of Nurses: Hospitals, Organisations and Healthcare. London: Routledge; 2014.
- Hammersley M, Atkinson P. Ethnography: Principles in Practice. London: Routledge; 2007.
Appendix 1 Additional study information
The following documents are available at the project web page (URL: www.journalslibrary.nihr.ac.uk/programmes/hsdr/1415608/#/):
-
participant information sheet for patients
-
participant information sheet for staff interviews and observations
-
patient participant consent form for interview
-
participant consent form for photograph
-
staff participant consent form for interview
-
patient consent form for observation
-
document release consent form.
Appendix 2 Topic guide for interviews with patients/patients’ carers
Appendix 3 Topic guide for interviews with front-line staff
Appendix 4 Topic guide for interviews with managerial staff
Appendix 5 Anonymised poster with summary of trust information as used in Joint Interpretive Forum
Logo reproduced with permission from King’s College London.
List of abbreviations
- A&E
- accident and emergency
- ANT
- actor–network theory
- CCG
- Clinical Commissioning Group
- CNS
- clinical nurse specialist
- CQC
- Care Quality Commission
- CQUIN
- Commissioning for Quality and Innovation
- FFT
- Friends and Family Test
- HSDR
- Health Services and Delivery Research
- JIF
- Joint Interpretive Forum
- MDT
- multidisciplinary team
- NAD
- National Audit of Dementia
- NCPES
- National Cancer Patient Experience Survey
- NIHR
- National Institute for Health Research
- PALS
- Patient Advice and Liaison Service
- PDSA
- plan, do, study, act
- PPI
- patient and public involvement
- QI
- quality improvement
- R&D
- research and development