Notes
Article history
The research reported in this issue of the journal was funded by the HS&DR programme or one of its preceding programmes as project number 10/1012/09. The contractual start date was in September 2011. The final report began editorial review in October 2012 and was accepted for publication in March 2013. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HS&DR editors and production house have tried to ensure the accuracy of the authors' report and would like to thank the reviewers for their constructive comments on the final report document. However, they do not accept liability for damages or losses arising from material published in this report.
Permissions
Copyright statement
© Queen's Printer and Controller of HMSO 2013. This work was produced by Hanney et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Chapter 1 Background
The research question
This evidence synthesis responds to a call by the National Institute for Health Research (NIHR) Service Delivery and Organisation (SDO) programme in 2010 that requested a synthesis ‘which maps and explores the likely or plausible mechanisms through which research engagement might bear on wider performance at clinician, team, service or organisational levels’. 1 The invitation to tender (ITT) called for a review of the relevant research literature to see what empirical support exists for any theoretical mechanisms, and stated that ‘A theoretically and empirically grounded assessment of the relationships, if any, between research engagement and performance should explore issues of causation’. 1 These statements led us originally to formulate the research question we intended to address in this review as follows: Engagement in research: does it improve performance at a clinician, team, service and organisational level in health-care organisations?
We found there was a widely held assumption that research engagement improved health-care performance at the various levels, as, for example, expressed in the 2010 White Paper, Equity and excellence: liberating the NHS2 and in a 2010 briefing from the NHS Confederation. 3 The briefing reported possible explanations, expert comments and some evidence to support this view, including reference to the web site of the Association of UK University Hospitals (AUKUH) which had issued a press release showing that university hospitals on average did better than non-AUKUH hospitals in the quality of service scores produced by the Care Quality Commission. 4 However, this AUKUH analysis was very brief and did not take into account other possible factors. In practice, in our early explorations at the time of the ITT in mid-2010, we could identify very little direct empirical evidence to support this assumption. Nevertheless, we identified diverse bodies of knowledge that we thought might help to address the question.
Given these circumstances, we knew that this review would not be straightforward and that we would have to think constructively about both its scope and our methodological approach as we explored the field, making revisions to our proposed approach where needed. These changes are discussed in more detail in Chapter 3. Below we set out the interpretation we put on key terms, the scope of the study we eventually conducted, and provide a brief guide to the contents of the remainder of this report.
Interpretation of key terms
We started the overall evidence synthesis by making the initial scoping, planning and mapping stage as wide as possible in an attempt to maximise our chances of capturing any coherent bodies of empirical evidence relating to the question or to any mechanisms that might be relevant. This wide scope also included our initial interpretations of the terms ‘research engagement’, ‘engagement in research’, ‘performance’ and ‘mechanisms’ that had been used in the ITT.
‘Research engagement’ and ‘engagement in research’
In line with discussions about reforms to health research systems which illustrated the Government's desire to involve the health-care system in research,5 we initially interpreted the term ‘research engagement’ broadly. We also recognised the need to look for both positive and negative impacts of research engagement.
Driving this ITT was a concern to improve understanding of the impact on health-care performance of engagement in research. With this in mind, we took ‘engagement in research’ to mean a deliberate set of intellectual and practical activities undertaken by health-care staff and organisations. These include the following:
(a) Health-care staff (as individual clinicians and as teams):
- being involved in actively undertaking research studies
- playing an active role within the whole research cycle, including research design and commissioning
- undertaking research training for a research degree
- playing an active role in research networks, partnerships or collaborations.

(b) Health-care services and organisations:
- supporting health-care staff in the activities outlined in (a) above
- playing an active role in research networks, partnerships or collaborations, either internally (within the particular service or organisation) or externally (with other organisations and researchers).
The ITT explicitly and prominently highlighted the literature on engagement in medical leadership as being a different but related domain to that of research engagement, implying, we assumed, that it might potentially make an important contribution to the requested evidence synthesis. Our interpretation of ‘engagement in research’ as an active process undertaken by health-care staff ties in with the literature on medical/workforce engagement and impact on performance. 6–9 For example, Bakker8 defines work engagement as a ‘positive, fulfilling, work-related state of mind that is characterised by vigour, dedication, and absorption’ (p. 51).
In a recent paper relating medical engagement to organisational performance, Spurgeon et al. 9 talk about ‘the active and positive contribution of doctors within their normal working roles to maintaining and enhancing the performance of the organisation which itself recognises this commitment in supporting and encouraging high quality care’ (p. 115) (our italics). Thus, health-care services and organisations also have an active role to play. The relation between workforce engagement and performance is dependent on a symbiosis between organisations and those who work within them. 10,11 This organisational support and encouragement is a two-way process involving organisations working to engage employees and the latter having a degree of choice as to their response. 12,13
Although such insights were helpful, the literature on medical/workplace engagement did not, as we initially hoped, prove a fruitful source of empirical findings on our own narrower question. There were two main reasons. First, ‘engagement’ is a complex term that means different things to different people. It is variously interpreted in the literature and there is no universal definition:13 this can impede the identification and interpretation of empirical findings. Thus, while many authorities advocate physician engagement as the key to organisational performance,14 a common theme in papers discussing this issue is that there is limited empirical evidence about the positive impact of enhanced medical engagement on organisational performance,7,9,15 although it has been demonstrated that lack of engagement presents significant problems in the organisational pursuit of change and improvement. 7 Spurgeon et al. 9 summarise the current position, describing ‘engagement’ as a multifaceted construct whose complexity has probably contributed to the lack of clarity in understanding the links between levels of engagement and performance. Their study did, however, find a correlation between medical engagement and service improvement.
Second, the literature on medical/workplace engagement takes little heed of research activity as a factor in improving performance, and the ability to ‘engage in research’ (however defined) is not seen as one of the core competencies in medical professionalism. Thus, it is claimed that there ‘is increasing acknowledgement by the medical profession that doctors need to be competent managers and leaders at all stages of their careers’ (p. 3) (our italics) but engagement in research as a possible part of those careers is not mentioned in this paper. 7 In fact, research is only mentioned once – the authors discuss the skills required to enable doctors to function efficiently and effectively within complex systems and quote Tooke's16 view that ‘the doctor's role as diagnostician and the handler of clinical uncertainty and ambiguity requires a profound educational base in science and evidence-based practice (EBP) as well as research awareness’ (p. 17). Tooke's16 insight is confirmed by others. In a broad ranging discussion of the changing nature of medical professionalism in the context of practice innovation, Mechanic17 talks about ‘practicing in an evidence-based way that acknowledges and takes account of current medical research and understanding’ (p. 329). However, there is no discussion in these key papers (or to our knowledge in the medical engagement literature more generally) of what this research awareness is or should be, of how it is to be achieved and maintained, or how it might (in the context of medical engagement more generally) be a factor that contributes to improved performance.
In this context we note that the terms ‘engagement in research’ and ‘engagement with research’ are sometimes used interchangeably in the literature. Given this, it seemed relevant at the start of our review to explore how far a broader definition of research engagement could also include engagement with research, taking this term to mean a less substantial involvement at individual and team level related more to receiving and transmitting the findings of research. It could therefore include aspects of activities such as continuing medical education (CME) and knowledge mobilisation, which often focus on encouraging research utilisation alone, and not on research utilisation as an integral phase in the whole research cycle. This is an important distinction between ‘engagement with research’ and ‘engagement in research’, and since our brief was to explore the whole pathway from research engagement to performance we finally decided to concentrate our resources primarily on the interpretation of research engagement as ‘engagement in research’ in the sense given above.
‘Performance’
In health care, the case for the universal measurement of clinical and organisational performance is well established,18 but the specific nature of performance measurement is a contested topic and there is no neat prescription for the design of performance measurement systems in the NHS, or more generally. 19 A wide range of measures has been used: ‘there exists, for any organisation, a range of possible measures. This is true especially of health care, with measures of clinical process, health outcomes, access, efficiency, productivity and employee variables all offering some potential’ (p. 107). 20
Complicating the picture further, Mannion et al. 21 suggest that different channels of communication may convey different performance information.
A variety of systems to assess aspects of NHS performance have been used in recent years – including the star rating system for NHS trusts, clinical governance reviews, National Service Framework reviews, national and local audits, performance monitoring by strategic health authorities, and Quality Accounts. 21,22
Reflecting the origins of performance measurement in industry, much of the literature on health-care performance initially focused on health-care organisations and on their use of measures of activity and cost. 23 Over time health-care organisations' need to demonstrate clinical quality as well as efficiency24,25 led to the development of measures of clinical output and outcome and a move to a balanced scorecard approach. 23 Within the context of the 2008 Darzi review,26 and subsequent Government White Papers,2 in the NHS in England this emphasis on the measurement of the quality of care delivered has continued. The development of clinical audit has also focused attention on the measurement of clinical performance against measures of clinical process and health outcome. In some clinical fields in the NHS in England, clinical audit is now mandatory and the participation of health-care organisations in these clinical audits is monitored through the Quality Accounts that NHS health-care providers are required to produce annually. Despite all these changes there has, until the relatively late arrival in 2010 of the Quality Accounts, been a marked absence of indicators relating to research activity or engagement in research in NHS trusts. If Pettigrew et al. 27 are right in their contention that performance measurement systems are an important determinant of organisational performance in their own right [‘you get what you measure’ (p. 8)], then this earlier failure to develop research indicators suggests that although medical research has generally been seen as a good thing, it took a shift to a quality agenda for research activity and research engagement in trusts to be seen as potential drivers of improved organisational performance.
Given all this, we decided that exploring improvements in health-care performance at all levels across all possible measures in a focused review would have been a very large and perhaps impossible task. Reflecting the quality agenda, we therefore decided to concentrate our focused review on papers that discussed improvements in clinical processes and outcomes.
‘Mechanisms’
In a broad sense ‘mechanisms’ have been defined as ‘underlying entities, processes, or [social] structures which operate in particular contexts to generate outcomes of interest’ (p. 368). 28 However, in this review we took a simpler interpretation related to our notion of engagement in research as a set of activities, and defined ‘mechanisms’ as levers that instigate or sustain a relationship between those activities and improved health-care performance.
Scope of the review
In an attempt to identify the nature of whatever evidence might already have been collated, we initially explored and mapped a wide range of bodies of knowledge. These are listed in Chapter 3. However, it became clear that no single, readily identifiable area of knowledge would comprehensively address the review question. The lack of direct evidence about research engagement in these bodies of knowledge indicated that, in any overall assessment of how health-care performance might be improved, many other factors are analysed ahead of research engagement. It also suggested that our search for evidence about the impact of research engagement would be complex. Furthermore, within the many bodies of research that explore the diffusion of innovations and were reviewed by Greenhalgh et al. ,29,30 not only does research engagement appear to play a limited direct role in the explanations discussed, but also there is a great variability of circumstances in which innovations can be adopted. We assumed that there was likely to be a similar variability in the mechanisms through which research engagement might improve health-care performance, and in the circumstances in which those mechanisms would succeed.
In the next chapter, we highlight findings from a number of existing reviews of varying degrees of relevance to our evidence synthesis. Of these, one review impacted directly on the structure and scope of our evidence synthesis. This was a systematic review by Clarke and Loudon published in 201131 (i.e. after proposals for our project had been submitted). It focused on the narrower topic of the effect on patients of their health-care practitioner's or institution's participation in clinical trials. It concluded that there might be a ‘trial effect’ of better outcomes, greater adherence to guidelines and more use of evidence but, crucially, went on to state ‘the consequences for patient health are uncertain and the most robust conclusion may be that there is no apparent evidence that patients treated by practitioners or in institutions that take part in trials do worse than those treated elsewhere’ (p. 1).
Given this uncertainty about whether or not research engagement improves performance, we decided first to address this question in a focused review before moving on to questions about the mechanisms through which such changes might come about. It was also apparent that it would be impossible to conduct comprehensive systematic database searches in all the bodies of knowledge we had covered in our original scoping and mapping exercise. Therefore, although we wanted a broader coverage than Clarke and Loudon,31 we decided to concentrate on the central question of whether or not research engagement (tightly defined to cover engagement in research rather than the broader engagement with research) led to measurable improvement in actual health-care processes and outcomes. The second half of this definition (‘measurable improvement in health-care processes and outcomes’) was also tightly defined. It excluded studies that considered whether or not research engagement led merely to steps along the path to improved performance (such as increased research utilisation, changes in attitudes towards research, or gains in efficiency), but did not go on to consider if these led to improvements in the quality of care.
After the initial mapping phase we also decided the scope of our review was already too wide to include the slightly separate but equally important topic of whether or not engaging the public and patients as partners in research improved health-care performance. 32 We recognise the importance of this issue, and also that there are ways in which patients can engage with research and then press their clinicians to provide what might be seen as research-informed treatments. Nevertheless, the steer in the original brief was to focus on health-care professionals and organisations. We also noted that the topic of patient involvement in health research was being covered by a separate review by Evans et al. , also commissioned by the NIHR SDO programme: Public involvement in research: assessing impact through a realist evaluation.
Considerable benefits from health research have been demonstrated using various approaches, including the Payback Framework. 33–36 We assumed, however, that we should interpret our research question to focus on whether or not any identified improvements in the performance of clinicians or health-care organisations had resulted in some way from the processes of their being engaged in research, whether or not that improvement occurred concurrently with the conduct of research or subsequently. In our search for papers it proved challenging to establish the precise inclusion criteria to enable us to focus on this interpretation, and we return to this issue at several points in this report. Nevertheless, our analysis of the papers from the focused review provided an answer to the question of whether research engagement improves performance.
Finally, in order to address the question of the mechanisms that might cause research engagement to lead to improved health care, we recognised that it would be important to draw on a wider range of papers than just those that met the inclusion criteria for the focused review. We therefore decided to supplement the focused review with a wider but more informal review of a more extensive range of papers. In the searches for the focused review we identified many papers, but finally included in that review only those that met our inclusion criteria (i.e. included empirical data and covered in a single paper the whole pathway from research engagement to improved health-care performance; see Chapter 3, Methods for more details). The starting point for the wider review was all the papers that reached the full-paper review step of the focused review, plus some additional papers that we had identified in the mapping exercises. Using criteria, again set out in Chapter 3, we identified a broad range of papers, and in particular included those that provided important descriptive or theoretical accounts, and/or provided evidence about progress along some of the steps between research engagement and anticipated improvement in health-care performance, and/or supported (or challenged) the findings from the focused review about whether or not engagement in research improves health-care performance. We drew on the papers from the wider review to supplement those from the focused review to enable us to conduct a fuller analysis of causation in the links between research engagement and performance by exploring evidence about potential mechanisms and about the theories underpinning their development and adoption.
A road map of the report
We have attempted to write this report as a coherent whole, but inevitably in addressing the complexity of the subject matter we have followed various theoretical and empirical pathways. Therefore, in Box 1 we provide a brief summary of what is covered in each of the remaining chapters.
Chapter 1 sets out the background to the review including our interpretation of the research question and of various key terms (research engagement, performance, and mechanisms), and the scope of the review
Chapter 2 describes the existing theories, reviews and analyses on which our review draws and uses them to develop a matrix to help us analyse the variety of mechanisms through which research engagement might improve health-care performance. Inevitably the relationship between the theories and the subsequent methods and data collection was iterative and so the matrix reported here is the final version after revisions during the review
Chapter 3 describes the methods used in what we have christened an ‘hourglass’ review that consists of three main stages: Stage 1: Planning and mapping; Stage 2: The focused review; and Stage 3: Wider review and report
Chapter 4 presents the findings from the focused review on the question of whether or not research engagement improves health-care performance
Chapter 5 analyses the relationships between research engagement and performance by drawing on empirical and theoretical evidence, including evidence on potential mechanisms, from both the focused review and the wider review
Chapter 6 draws on both the focused review and, especially, the wider review to analyse how organisational support for research engagement from health-care organisations can improve performance
Chapter 7 presents our discussion and conclusions, including implications for practice and recommendations for research
Chapter 2 Key theories, reviews and analyses – developing an analytic matrix
Key theoretical positions that informed the review
As noted, the ITT for this review requested a ‘theoretically and empirically grounded assessment’. Therefore, our initial planning and mapping exercise explored several major theoretical approaches that could, potentially, contribute to informing the conduct of the review and to building a framework within which to analyse the mechanisms through which engagement in research can improve performance. Here we describe existing theories and reviews that form part of the background to the project. The process by which we selected those existing theories and reviews, described below, and drew on them to help build the matrix, inevitably involved iteration and was informed by the Stage 1 planning and mapping exercise. Initially, in the mapping stage, we were as inclusive as possible. However, what drove the selection of the specific theories included was the decision (described in Chapter 1) to concentrate on research engagement as ‘engagement in research’ rather than a wider interpretation that would also have included ‘engagement with research’.
Research engagement as a way of increasing ability and willingness to use research: the theory of absorptive capacity, and theories about the characteristics of research adopters
Absorptive capacity
The concept of ‘absorptive capacity’ as described by Cohen and Levinthal in the context of industrial research37,38 (see also Rosenberg39) suggests that conducting research and development (R&D) within a firm helps that firm to develop and maintain its broader capabilities to assimilate and exploit externally available information from research. Further, they claim that ‘absorptive capacity may be created as a byproduct of a firm's R&D investment’ (p. 129). 38 Subsequent authors also used the concept in the analysis of industrial development, noting, for example, that ‘Japanese firms, which were importing many technologies from Europe and North America, were exceptionally good at the improvement of both processes and products but they were only able to do this because of their own strengths in R&D’ (p. 120). 40
The original theory could apply at different levels, including individual and organisational levels. Various authors developed the concept further in relation to organisations. In particular, Zahra and George41 broadened understanding of this concept beyond the original view that an organisation's engagement in research is the key issue. In 2006 Lane et al. 42 reviewed the use of the concept of absorptive capacity in the many subsequent articles claiming to be drawing on it. Lane et al. 42 suggest that much of the later writing has followed the approach of Zahra and George. 41 This has also occurred in the health field where, for example, a study exploring the capacities of health-care organisations to absorb research defined absorptive capacity in terms of environmental scanning, collection of satisfaction data, and the level of workforce professionalism. 43 The result is that the simpler interpretation of absorptive capacity being a by-product of research is sometimes overlooked.
Nevertheless, and returning to the original by-product concept, it can still be argued that when clinicians and managers in a health-care system are seen as stakeholders in the research system then their engagement in research can be a way of boosting their ability and willingness to use research from wherever it might originate. For example, drawing on parallels from the literature on industrial research, Buxton and Hanney44–47 include this increased capacity to use research as one of the benefits identified in their multidimensional categorisation of benefits from health research, and this approach is replicated in other health research impact assessment frameworks that build on the Payback Framework,48,49 and when the development of health research systems in low- and middle-income countries is being considered. 50 Part of the theoretical underpinning behind the idea that research engagement contributes to building absorptive capacity is the notion that, in practice at least, knowledge is not a pure public good but instead requires a level of understanding of research before it can be absorbed. 51 Such understanding can be built up by undertaking research.
Characteristics of research adopters
Rogers52 identifies 26 ways in which early adopters of innovations differ from later adopters, but being engaged in research is not among them. Greenhalgh et al.,30 adapting earlier work,53 list the personal characteristics of those who adopt innovations. Personal values are included in the list, and Greenhalgh et al. cite one study54 suggesting that actors adopting medical innovations include those motivated by values, for example ‘“academic” doctors feel the need to align with evidence from research trials’ (p. 111). 30
The above discussion relates to the absorption and/or adoption of research findings by a wider group of people than those who originally undertook the research. It is, however, relevant to consider if aspects of these theories also relate to those who produce specific research findings, and might then be in a position to adopt them in their regular practice. Both Rogers52 and Greenhalgh et al. 30 draw on the work of Ryan and Gross55 to set out a model of the stages in the process of adoption, and the first stage is knowledge (i.e. awareness of the innovation). However, although participating in research evaluating a specific innovation is a highly effective way of accessing knowledge about it, and this increased knowledge can be seen as a by-product of that participation, this point is not specifically made in the model (and clearly not everyone whom it is hoped will adopt an innovation could be involved in a trial).
The co-ordination of research engagement to enhance its effectiveness: a role of research networks
Research networks are increasingly important in several countries, including the USA and the UK. In the USA, practice-based research networks (PBRNs) have been in existence for several decades. They not only provide a means for community practitioners to become involved in research, but can also facilitate the ‘downstream’ dissemination and implementation of research results into community-based clinical practice by:
. . . enhancing providers' acceptance of research results and strengthening their commitment to acting on research findings. This level of participation also often encourages practice organizational changes that facilitate the research and allows participants access to more or earlier information compared with others who did not participate in the research.
(p. 4441)56
As they mature, it becomes clear that research networks can have more than one goal. Some have evolved from structures whose primary purpose is to involve clinicians and health-care organisations in clinical trials to become structures that also aim to spread information about research and research findings. In addition, research networks can encourage members to identify research needs in order to make the research undertaken more relevant,11 and to this extent they use the collaborative approach described below.
In the UK, clinical research networks are a central pillar in the development of the English NIHR. 57 The objectives of the Clinical Research Network Coordinating Centre58 are more restrictive than those described above for the research networks in the USA and focus on aspects such as increasing the efficiency of clinical research processes and increasing the number of NHS trusts that are involved in clinical research. This might suggest that any improvements in health care that result from the research engagement would be far from the main purpose of the network, but in fact such impacts were identified as possible outcomes. The Cancer Research Network led the way because it was hypothesised that:
. . . if more patients could have access to late phase clinical trials, not only would their outcomes improve as they gained access to novel therapies, but the inculcation of a research culture across all hospitals delivering cancer care would also lead to a general improvement in the standard of care.
(p. vii29)59
This claim does suggest some theorising about how these networks are a mechanism through which such impacts might arise. It is also compatible with statements about the PBRNs in the USA. As such it is of interest not only to know what research networks can achieve, but also how they do so.
There are some overlaps between the thinking in the ‘stages of innovation’ model promoted by Rogers,52 and the basic role of research networks. The first step of the ‘stages of innovation’ model is that those adopting an innovation must first have knowledge of it, and then be persuaded to believe they should use it. We suggest that the knowledge of, and trust in, an innovation that comes from being involved in research helps explain the adoption process. Rogers52 also argues that trialability and observability are important characteristics of innovations that are adopted. Although this might appear to be the same as having knowledge about an innovation, Rogers52 was here talking about characteristics of the innovation rather than the adopter. However, we believe this concept could be adapted to help explain how an increase in the number of clinicians undertaking research (perhaps through research networks) might increase adoption of evidence-based approaches. Thus, to use and perhaps reinterpret terms used by Rogers52 when describing the characteristics of innovations, research networks can promote ‘trialability’ (here defined as giving clinical staff and health-care organisations the opportunity to gain experience of an innovation on a limited basis in the context of a clinical trial) and ‘observability’ (here defined as giving clinical staff and health-care organisations the opportunity to observe the impact of innovations in other parts of the network).
Research engagement as a way of ensuring the research produced has more likelihood of being used to improve the health-care system: theories of collaboration and of action research
Under this heading we examine several partially overlapping theories that focus on ways of engaging potential users from the health-care system in aspects of research processes with the aim of improving the relevance of research to users, and increasing the likelihood of research being used.
Collaborative research
In 2003 Denis and Lomas60 traced the development of a collaborative approach to research in which researchers and potential users work together to identify research agendas, commission research, and so on. They identified the 1983 study by Kogan and Henkel61 of the R&D system of the health department in England as a key early contribution. According to Kogan and Henkel61 a critical feature of the collaborative approach, as applied to research on policy-making, is that researchers and policy-makers should join forces ‘to identify research needs against policy relevance and feasibility’ (p. 143). They developed their theories about the collaborative approach not only from their own extensive 7-year formative evaluation of developments in the health department's R&D division, but also from models of social research impact developed by Carol Weiss,62 including an interactive model which they claimed reinforced the notion that the boundaries between science and society are permeable.
Work to analyse and operationalise this collaborative approach has been undertaken in the Canadian Health Services Research Foundation (CHSRF), led by Jonathan Lomas. 63,64 His concept of ‘linkage and exchange’ between users and researchers has been particularly influential. The belief that collaborative research is more likely to be relevant to potential users is a key concept. Collaborative approaches to research somewhat mirror the concept of ‘Mode 2’ research (i.e. research that is context-driven, problem-focused and interdisciplinary). 65,66 The applicability of ‘Mode 2’ concepts to health research was analysed by Ferlie and Wood. 67
An early and successful example of developing and promoting collaborations between researchers and health-care staff to maximise engagement in research, and encourage wider use of the findings of research, was provided by the post-war childhood leukaemia trials in the UK (and the USA). However, as Moscucci et al. 68 demonstrate, this success was heavily dependent on the context in which the collaboration developed at the time. In addition to being applied in one-off situations, there are now efforts to go further than the institutional developments described by Kogan and Henkel61 and build long-term collaborative approaches that, it is hypothesised, will lead to research that is more likely to meet the needs of a health-care system, and hence be used and improve performance. (Some of these are described in Chapter 6.)
Greenhalgh et al. 30 also build aspects of the collaborative approach into their model for considering the determinants of diffusion of innovations in the organisation and delivery of health services. Again drawing on the work of Rogers,69 they suggest that successful adoption of an innovation is more likely if developers or their agents are linked with potential users at the development stage in order to capture and incorporate the user perspective.
The collaborative approach is generally more interventionist than the absorptive capacity concept, but sometimes they can be brought together. In current initiatives and writings about the health research system there is often overlapping emphasis: first, on collaborative working with users of research so as to produce applied research that is more likely to be adopted by potential users because of its relevance to them; and second, on engaging users in the wider research enterprise in a variety of ways and through various networks to encourage the uptake of research findings more generally. This second approach can be interpreted as an attempt to build absorptive capacity. In 2008 Collaborations for Leadership in Applied Health Research and Care (CLAHRCs) were introduced by the English NIHR to boost the effectiveness and translation of health research, and the following definition of CLAHRCs70 highlights their complex remit:
The collaborations have three key interlinked functions: conducting high quality applied health research; implementing the findings from this and other research in clinical practice; and increasing the capacity of local NHS organisations to engage with and apply research.
(p. 108)70
Initiatives such as the CLAHRCs also illustrate how collaboration can work at various levels, with one aim of collaboration at an organisational level being to help foster collaboration at other levels within the participating bodies. (We discuss this further in Chapter 6.)
Action research
The notion that research users should be closely involved in research activity in order to produce research that meets their needs also overlaps with theories underpinning action research, or participatory action research (PAR). The action research model was first conceptualised by Lewin71 and has been described and developed in much subsequent writing. It consists of a spiral of cycles of action and research, with four phases: planning, acting, observing and reflecting. The aim is to enable investigators to link theory with practice using a participatory and consensual approach towards investigating problems and developing plans to deal with them.
A realist review published in 2012 by Jagosh et al. 72 looks at the benefits of participatory research and examines the co-construction of research through partnerships between researchers and people affected by and/or responsible for action on the issues under study. Building on the work of Lasker et al.,73 they adopted the theory of ‘partnership synergy’ to hypothesise that equitable partnerships, with stakeholder participation throughout the project, succeed largely through synergy.
Research engagement at an organisational level to improve the performance of the health-care organisation
Given the wide scope of our original brief and the diversity of the fields of study involved, our decision to focus on engagement in research rather than also include engagement with research was necessary to give our review sufficient focus. However, there are limitations to this approach. One of these is that it cuts across some of the major theoretical approaches that underpin efforts to improve organisational performance in health care, such as promoting learning organisations, adopting an organisational approach to quality improvement (QI), and knowledge mobilisation. Yet we found that in all these fields, research-related discussion tends to be largely about research utilisation, EBP and translation gaps: research engagement as a term is rarely used, and further refinements of that term are not considered.
These issues are illustrated in a recent review of knowledge mobilisation and research utilisation by Crilly et al. 74 They note that there is a well-established literature on the utilisation of clinical evidence in health care, but that there has been less consideration of the utilisation of management evidence and research by health-care organisations. Their review aimed to redress this imbalance and paid particular attention to the management literature, identifying 10 large domains or thematic categories in that literature. Their subsequent search of the health and social science literature identified two further domains: the evidence-based movement and ‘super structures’ – defined as ‘the infrastructure of institutions and funding that commission health-care research’ (p. 45). However, it was only in relation to this last domain that this large review found any literature in which research and engagement were considered together as a means to promote change. In a discussion of the various ‘structures’ developed by the English health R&D system to boost the translation of research, Crilly et al. 74 quoted the Department of Health's High Level Group on Clinical Effectiveness recommendation about increasing the effectiveness of clinical care through promoting the development of new models to encourage relevant research, engagement and population focus and embed a critical culture that is more receptive to change. 75
Crilly et al. 74 also made a further and more general point. They noted that although learning processes and their relationship with organisational design emerged as an important theme in their review, the whole question of organisational form had received little attention in health-care literature, despite major reorganisations designed to promote bench to bedside research translation and organisational learning. We were therefore especially interested in our own review to identify health-care organisations that have given particular attention to engagement in research in their overall approach to improved performance, and explore the theoretical underpinnings of those approaches.
We found several examples, many of them from the USA. Thus, the Veterans Health Administration (VHA) in the USA included various aspects of research engagement as part of a comprehensive re-engineering exercise designed to improve the quality of health care provided. 76 According to a former Undersecretary for Health at the United States Department of Veterans Affairs (VA), the fact that VA investigators are nested in a fully integrated health-care delivery system with a stable patient population that has an exceptionally high prevalence of chronic conditions provides them ‘with unparalleled opportunities to translate research questions into studies and research findings into clinical action’ (p. I–9). 77 One high profile way in which this integration has been achieved is through the Quality Enhancement Research Initiative (QUERI) programme, which was launched in 1998 to accelerate the implementation of new research findings into clinical care by creating a bridge between those performing research and those responsible for health systems operations. In a recent evaluation of QUERI, Graham and Tetroe78 highlight how this comprehensive approach has successfully incorporated some of the theoretical approaches described above, referring to a ‘paradigm shift to an action-oriented approach that meaningfully engages clinicians, managers, patients/clients, and researchers in research-driven initiatives to improve quality’ (p. 1). This shift, they claim, is towards coproduction of knowledge or ‘Mode 2’ knowledge production.
In QUERI, and in other complex initiatives such as the CLAHRCs in the UK (the ‘community wide academic health centres’ referred to above), various theoretical approaches overlap, and detailed evaluations of such initiatives provide useful pointers to the different mechanisms through which research engagement might lead to improved health care. (We return to these issues in Chapter 6.)
Previous reviews and analyses
The findings from a range of previous reviews are relevant to our evidence synthesis to varying degrees and shaped our thinking about its scope. As we discuss these previous reviews below, we attempt to assess their relevance to our own review, but even the most relevant were not included in our focused review because of the danger of double counting.
Practitioner or institution participation in clinical trials
The systematic review by Clarke and Loudon published in 201131 has already been mentioned and is important for several reasons. First, its findings can be interpreted as highlighting the need for a further focused review to update, and if possible expand, the search in an attempt to identify if there is sufficient evidence to come to a somewhat firmer conclusion about whether or not research engagement is likely to improve health-care performance. Second, it addresses the boundary between studies on practitioner or institutional participation and those exploring whether or not patients who are involved in trials achieve better health outcomes.
Patient involvement in trials
Previous reviews, notably by Vist et al.,79 indicate that, on average, ‘the outcomes of patients participating and not participating in RCTs were similar, suggesting that participation in RCTs, independent of the effects of the clinical interventions being compared, is likely to be comparable’ (p. 2). However, an earlier review by Stiller80 concluded that: ‘referral to a specialist centre or to a hospital treating many patients with the disease, or inclusion in a clinical trial, is often linked with a higher survival rate for the cancers which have been studied’; and he went on to suggest that there was ‘no evidence that centralised referral or treatment according to protocols leads to lower survival rates’ (p. 360).
These reviews are of particular importance to our evidence synthesis because they provide a complex, but partial, mirror image to the evidence about whether or not research engagement by clinicians and organisations leads to improved health care. Their unit of analysis is patients, whereas the reviews by Clarke and Loudon,31 and by ourselves, examine the care provided by clinicians and by institutions. However, there are potential overlaps, for example, Vist et al. 79 claim that the differences in care that were identified in some studies ‘might be due to differences in adherence to a protocol by participating clinicians’ (p. 4). This is another reason why we had to be very careful in devising the inclusion criteria for our study.
Health outcomes in teaching hospitals
There is a growing literature on whether or not the patients treated in a teaching hospital have better health-care outcomes than those treated in non-teaching hospitals. If this were shown to be the case, it could potentially be important evidence about whether or not research engagement leads to improved performance. Although most research-intensive hospitals are likely to be teaching hospitals, teaching hospitals are not necessarily research-intensive, which makes the findings difficult to interpret. Furthermore, the largest systematic review we found81 – of 132 PubMed studies – identified a great deal of variability in the results between the studies and concluded that, overall, no link could be confirmed. Although some papers did find a positive relationship, many warned about the difficulty of coming to any clear conclusions about this correlation given the large number of confounding factors. 82,83
Medical academics
The idea of medical academics being motivated to engage in research to improve the health care of their own patients is a powerful one. This most often takes the form of clinicians leading or participating in traditional clinical trials. In some instances, however, clinicians set out to conduct research not to test the general efficacy of one therapy compared with an alternative approach, but rather to identify the best treatment for an individual patient through multiple crossover studies conducted in single individuals. A review of such N-of-1 trials84 concluded that they ‘are a useful tool for enhancing therapeutic precision in a range of conditions and should be conducted more often’ (p. 761). Despite this conclusion, such trials are resource-intensive and unlikely to occur in more than a small minority of cases.
Collaborative approaches
The reviews of Clarke and Loudon31 and of Vist et al. 79 focus on trials of new therapies, and do not include papers describing specific interventions such as collaborative approaches. Among the reviews that have explored these approaches, Lavis et al. 85 concluded that, taking study design, study quality and consistency of findings into consideration, the most rigorous statement they could make about factors that encourage the use of research by health managers and policymakers was that ‘interactions between researchers and health care policy-makers increased the prospects for research use by policy-makers’ (p. 39). This finding came, however, from a relatively small number of papers, and it is not clear how far the interactions were collaborative prior to and during the research activity and would, therefore, meet our inclusion criteria for engagement in research. Focusing specifically on health policy-making, Innvær et al. 86 reviewed the literature on factors that led to research utilisation and showed that personal contact between researchers and policy-makers was the factor most often identified. Again, it is not clear how far this represents policy-maker engagement in agenda setting and the processes of research, or how far it is restricted to dissemination activities and is therefore engagement with, but not in, research.
Action research and participatory action research
Through action research health-care professionals (often nurses or therapists) work with researchers to explore locally identified problems. Given the nature of the process, it is not easy to maintain the distinction between these forms of research and direct implementation/improvement initiatives. Several reviews have examined action research but have found that the impacts on patient care and health outcomes are often not covered. For example, Soh et al. 87 reviewed action research studies in intensive care settings, but of the 21 studies included only one described improved health-care outcomes. 88 Munn-Giddings et al. 89 reviewed studies of action research in nursing published between 2000 and 2005; they identified 62 studies, but ‘Only 13% could be defined as having their focus on a direct impact on patient care’ (p. 474).
Assessing research impact
A systematic review of previous attempts to assess research impact found comparatively few comprehensive categorisations of the impacts of health research that attempted to include any analysis of whether or not research engagement led to improved health care as a by-product. 33 The studies that had considered this were mostly limited to a few that used the Payback Framework, in which research engagement leading to benefits as a by-product is one of the explicit categories of impact.
Assessing research utilisation
Some reviews cover the step in the pathway from research engagement to research utilisation. These are often more about engagement with research than engagement in research. For example, there are extensive reviews of knowledge transfer and exchange,90,91 diffusion of innovations,29,30 and research utilisation and knowledge mobilisation. 74 All of them cover aspects of researchers interacting with potential research users, which can in some cases, but not all, overlap with the collaborative approach. Mitton et al. 90 start their review by stating that: ‘Knowledge transfer and exchange (KTE) is an interactive process involving the interchange of knowledge between research users and research producers’ (p. 729). Some reviews of research utilisation include consideration of whether or not engagement in research is important in encouraging research utilisation, but the findings are often inconclusive. For example, a review of studies of research utilisation by nurses found that only 13 out of 55 papers examined research participation as a possible factor. 92 Although these studies described various forms of research involvement and a total of 13 individual characteristics were identified, each one of these characteristics (including the one most related to our concept of engagement in research) was assessed in fewer than four studies. This, the authors claimed, prevented them ‘from drawing conclusions on the relationships between individual characteristics typical of involvement in research activities and nurses' use of research findings in practice’ (p. 12).
Additional analyses published in 2011
The final body of work we cover in the ‘background’ is a supplement to the Annals of Oncology that was published in late 2011 during the course of our own review. This supplement built on the work of a 2009 workshop addressing the question of ‘The impact of the process of clinical research on health service outcomes’. 93 In the introduction, Selby93 notes that the Vist et al. review79 (cited above) examined a ‘substantial literature’ evaluating the benefits for individual patients of trial participation, but that there was a lack of convincing evidence. Of more relevance to our own review is the conclusion of the workshop that there is an even less extensive literature on the impact of research activity on the quality of health-care outcomes within research-active institutions and health-care systems in general and that there ‘is a pressing need for more research in this area’ (p. vii3).
The focus of the papers in this Annals of Oncology supplement varied. In setting out the key question the workshop thought should be addressed in future research, Pater et al. 94 claim that their question:
. . . has been deliberately phrased to indicate our concern with benefits that occur contemporaneously with the research activity. We do not directly address the question of whether institutions that participate in a particular research project are more likely to implement the results once they become known.
(p. vii58)94
Other papers in the supplement did, however, take a broader perspective and began to analyse some of the factors that might cause research engagement to lead to health-care improvement. 95 (We draw further on this set of papers in later chapters.)
Learning from previous studies and developing a matrix for analysing the potential mechanisms
These various theoretical perspectives and systematic reviews provided an important starting point for our own review. Taken overall they highlighted five key points:
- There appeared to be no firm answer to the question of whether or not research engagement does improve health-care performance.
- Research engagement is often subsumed within larger, more complex interventions.
- Studies often focus on steps along the pathway from research engagement to health-care outcomes and not the whole pathway.
- There are many studies that might contribute to a synthesis of possible mechanisms by which research engagement might improve health care, but also a large number of disparate activities described.
- There would be value in developing a matrix to help organise the data analysis, categorise studies and unpick which mechanisms operate in the very different circumstances described by the various papers.
From the above accounts we identified two main dimensions along which to categorise studies that assess whether or not research engagement leads to improved performance. We have called these two dimensions the degree of intentionality and the scope of the impact (Table 1).
| Degree of intentionality | Broader impact (e.g. as a result of their research engagement, health-care staff are more willing/able to use research produced by others to improve health care) | Specific impact (e.g. the findings from studies are used more rapidly and/or extensively by those involved in the research engagement that led to them) |
| --- | --- | --- |
| Least intentionality – by-product | Research engagement involves participating in research studies, and the absorptive capacity concept could be important in helping explain the ability to adopt research conducted elsewhere | Research engagement involves participating in research studies, and the mechanisms would be associated with knowledge, etc., about the specific research and its conduct |
| Some intentionality – research networks | Research engagement involves network membership which can include regular participation in research studies and, in addition, network communication about the conduct and findings from research studies in the field covered by the network | Research engagement involves network membership which can include regular participation in research studies and network communication about the conduct of those specific trials |
| Greatest intentionality – interventions | Research engagement here might include programmes to give clinicians/managers/policy-makers some experience of identifying research needs and participating in research. As a result they might then be more willing/able to use research from wherever it comes | Research engagement here can involve collaborative research, QI research initiatives, participatory and action research, and aspects of organisational initiatives. As a result the specific research produced is more relevant to the needs of those involved and they are more likely to use it |
The first dimension involves the degree of intentionality in the link between research engagement and health-care performance. Inevitably there are overlaps, but three main categories could be identified and viewed as forming a spectrum:
-
The end of this spectrum involving least intentionality includes the improved health-care performance resulting from engagement in research that can be categorised (to adapt Cohen and Levinthal's term37,38) as a by-product of a trial or other research which has been conducted to test a specific therapy or approach. This does not of course mean the improvement happens by accident, but rather that the production of these by-product benefits was not the primary aim of the research study in which the clinicians were engaged.
-
Research networks can broadly be seen as somewhere near the middle of this spectrum. They exist primarily to promote the conduct of research on specific therapies in a particular field, but often have additional wider remits of informing network members about the conduct of, and findings from, research in a field more generally, and addressing the identified research needs of network members.
-
The other end of the spectrum involves greatest intentionality: here there are various interventions such as the collaborative approach, participatory and action research, and organisational approaches where the intention is explicitly to produce improved health-care performance as a direct consequence of research engagement.
The second dimension in the proposed matrix is the scope of the impact made by research engagement. Here there are two categories, broader and specific, and the studies of these focus on different issues. Studies included in the ‘broader impact’ category take various forms, including:
-
studies assessing whether or not those clinicians and health-care organisations who engage in research are more likely than those who do not engage in research to apply the findings of research studies conducted elsewhere
-
studies assessing the more general health-care performance of clinicians, teams and organisations who engage in research, and comparing their performance with that of those who do not.
The ‘specific impact’ category covers two circumstances:
-
studies assessing whether or not improved health-care performance arises as a result of the subsequent rapid application of the findings from the specific research by those who engaged in the research that produced the findings
-
studies assessing whether or not improvement in health-care performance arises concurrently with the specific research activity (or subsequently) as a result of those conducting the research applying the evidence-based processes/protocols (but not the intervention regimens) associated with the research activity to a broader range of patients than just those participating in that particular research project.
The difficulties of applying the matrix include the fact that some of the papers in our focused review have features that fit into more than one category on a given dimension. Nevertheless, it is important to attempt such categorisations because potentially very different mechanisms may be at work in the different circumstances described by the two dimensions.
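To make the matrix more concrete, the sketch below (illustrative only; the class, field and example names are hypothetical and the category labels simply mirror Table 1) shows one way the position of a paper on the two dimensions could be recorded.

```python
from dataclasses import dataclass
from enum import Enum

class Intentionality(Enum):
    BY_PRODUCT = "least intentionality - by-product"
    RESEARCH_NETWORK = "some intentionality - research network"
    INTERVENTION = "greatest intentionality - intervention"

class ImpactScope(Enum):
    BROADER = "broader impact"    # e.g. greater willingness/ability to use research produced elsewhere
    SPECIFIC = "specific impact"  # e.g. faster/wider use of the findings of the study engaged in

@dataclass
class StudyCategorisation:
    """Position of one included paper on the two dimensions of Table 1."""
    paper_id: str
    intentionality: Intentionality
    scope: ImpactScope

# Hypothetical example: a trial whose conduct appeared to improve care as a
# by-product, with the impact limited to the findings of that specific trial.
example = StudyCategorisation("example-paper", Intentionality.BY_PRODUCT, ImpactScope.SPECIFIC)
print(example.intentionality.value, "/", example.scope.value)
```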
Chapter 3 Methods
The review involved the three stages set out in Figure 1. This is described as an hourglass review to reflect the scope of the conceptual analysis and the number of papers considered in detail (rather than the sheer volume of titles reviewed) at each stage. The three parts of the review were a broad mapping stage, followed by a focused or formal review on the core issue of whether or not research engagement improves health care, and a final stage which involved an exploration of a wider literature to help identify and describe plausible mechanisms.
We consulted an international expert advisory group and patient representatives at key points in the review process. Given the international nature of the advisory group the consultation was mostly by e-mail. However, we met face-to-face with the patient representatives.
Ethical approval was obtained from Brunel University's Research Ethics Committee.
Stage 1: planning and mapping
We made the initial scoping and planning phase as wide as possible in an attempt to ensure we captured any coherent bodies of empirical evidence relating to the question and any plausible mechanisms. We examined a large number of bodies of knowledge as listed in Box 2.
Twelve bodies of knowledge were explored in the initial mapping stage:
-
the organisation of health research systems and of collaborative approaches to the conduct of health research including linkage and exchange
-
the role of in-house industrial research, including concepts such as absorptive capacity
-
diffusion of innovations/absorptive capacity in health-care systems
-
performance at clinician, team, service and organisational level in health-care organisations, approaches to the measurement of such performance, and overlaps with the concept of medical engagement
-
the role of medical academics in translational research
-
assessing the impacts of research and how the utilisation of research improves health-care performance
-
quality improvement
-
knowledge mobilisation and management, and knowledge transfer
-
organisational processes and learning organisations
-
medical education, continuing professional development, and critical appraisal
-
evidence-based medicine
-
the role of patient engagement in research
For this exercise we drew on our existing knowledge, team meetings and brainstorming sessions, and consultation with the advisory group. We also started with an open mind about the types of research on research that might have addressed our question, following the Institute of Medicine's definition of health services research as a: ‘multidisciplinary field of inquiry, both basic and applied, that examines the use, costs, quality, accessibility, delivery, organization, financing, and outcomes of health care services to increase knowledge and understanding of the structure, processes, and effects of health services for individuals and populations’ (p. 3). 96
Our initial explorations presented us with a dilemma. Discussions with the project's information scientist confirmed that it would be impractical to conduct a focused search of all the bodies of knowledge that might have something relevant to say on the topic. Yet, as far as we could tell, no single body of knowledge appeared to contain enough relevant papers to make it sensible to focus the search explicitly on that area alone in order to explore the various mechanisms involved.
We therefore decided to extend the initial stage to enable us to map the field as widely as possible so as to inform the later more detailed database search. This mapping phase continued the approaches described above, plus hand-searching of journals, searching of relevant web sites, and searching the Effective Practice and Organisation of Care Cochrane database. The journals on which we conducted a preliminary hand search at this stage specialise in aspects of the relationship between research engagement and improved health-care performance. They included: Journal of Health Services Research & Policy; The Milbank Quarterly; Evidence & Policy; Implementation Science; and Health Research Policy & Systems. Preliminary internet searches were conducted on the following websites: English Department of Health; NIHR; National Institute for Health and Care Excellence; World Health Organization; numerous Canadian health research organisations (including CHSRF); and the University of Birmingham Centre for Health Services Management library.
Papers considered to be particularly relevant for the study were given a designated ‘KEY’ status, and we used snowballing to explore further potentially relevant references cited in these papers.
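As a hedged illustration of the snowballing step, the following sketch shows the underlying logic of collecting, for screening, the references cited in ‘KEY’ papers; the function and identifiers are hypothetical and do not describe any software actually used in the review.

```python
def snowball(key_papers, get_references, already_screened):
    """One round of citation snowballing: gather references cited by the
    'KEY' papers that have not yet been screened. `get_references` is a
    hypothetical lookup returning the reference list of a given paper."""
    candidates = set()
    for paper in key_papers:
        for ref in get_references(paper):
            if ref not in already_screened:
                candidates.add(ref)
    return candidates

# Toy usage with made-up identifiers
cited = {"KEY-1": ["P-10", "P-11"], "KEY-2": ["P-11", "P-12"]}
print(snowball(["KEY-1", "KEY-2"], lambda p: cited.get(p, []), {"P-10"}))
# -> {'P-11', 'P-12'} (set order may vary)
```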
The findings from this informal but extensive searching were used to develop initial maps of each of the bodies of knowledge from the diverse range listed above, and to inform the search terms used in the next stage – the focused review.
Stage 2: the focused review
The search strategy
The focused, or formal, review concentrated on the specific question of whether or not engagement in research improves health-care performance. We wanted a comprehensive search of as many databases as possible. The search terms we used for the MEDLINE searches are given in Appendix 2. The search terms were similar for the other databases (modified to meet the requirements of each). We were seeking to identify empirical research studies where the concept of ‘involvement in research’ was an input and some measure of ‘performance’ was an output. We included the VHA as a search term because our initial mapping, and expert advice, highlighted the importance of the VHA as a rare example of an organisation that had explicitly attempted to integrate research fully into its operations in order to improve health-care performance. The initial broad interpretation of our terms was tightened as we progressed through the review.
The search strategy covered the period January 1990 to March 2012 as the mapping phase suggested that this was the most fruitful period for addressing the review topic. We used English-language terms, although identified papers not published in English were considered for inclusion, and consideration was given to terms used in other English-speaking countries (e.g. the use of the term ‘community’ in North America can be noticeably different from its use in the UK). We sought papers containing empirical data from a whole range of research approaches, both quantitative and qualitative, in line with our broad interpretation of health services research. Our search was not, therefore, limited to clinical trials. We searched MEDLINE, EMBASE, PsycINFO, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Web of Science, Applied Social Sciences Index and Abstracts (ASSIA), British Nursing Index, Health Management Information Consortium (HMIC) and System for Information on Grey Literature in Europe (SIGLE) databases. The search strategy was developed by members of the research team and a senior information scientist from King's College London. These database searches were supplemented with more focused hand-searches of the five journals initially searched in Stage 1 (as listed above), papers suggested by our expert advisors and patient representatives, further searching of several national and international websites (as listed above) and snowballing of papers considered to be key for the discussion. Searches were conducted by an information scientist working closely with the review team.
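To illustrate the general shape of such a search strategy (a sketch only: the concept terms below are hypothetical placeholders, and the terms actually used are given in Appendix 2), synonyms for each concept are combined with OR and the concept blocks are then combined with AND.

```python
# Illustrative only: these concept blocks are hypothetical stand-ins for the
# actual MEDLINE strategy, which is reproduced in Appendix 2.
engagement_terms = ["research participation", "conducting research",
                    "research network", "action research"]
performance_terms = ["quality of care", "clinical outcomes",
                     "guideline adherence", "health care performance"]

def or_block(terms):
    """Combine synonyms for one concept with OR."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Concepts are then ANDed together; the date limit (January 1990 to March 2012)
# and database-specific syntax would be applied separately in each database.
query = or_block(engagement_terms) + " AND " + or_block(performance_terms)
print(query)
```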
Steps in the focused review
First step – title review
This step involved examination of the title of each paper, and occasionally the abstract when the title provided too little detail, to quickly exclude documents clearly not relevant to the review. The predominant aim here was to be inclusive, excluding only papers clearly not relevant. Reasons for exclusion at this step were: not health related; not a human study; no mention of research (or related terms); no clinical outcomes or processes. At first papers were reviewed by two reviewers independently, but this was reduced to one reviewer after a short time because the number of abstracts to be studied was large and a test indicated that agreement between the reviewers was satisfactory.
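The nature of the agreement test is not specified here; as a hedged illustration, one simple check on a double-screened sample is to compute percentage agreement and Cohen's kappa, as in the sketch below (the sample data are entirely hypothetical).

```python
def agreement_stats(decisions):
    """Percentage agreement and Cohen's kappa for a double-screened sample.
    `decisions` is a list of (reviewer1, reviewer2) include/exclude pairs.
    The report does not state which statistic was used; this is a sketch."""
    n = len(decisions)
    observed = sum(a == b for a, b in decisions) / n
    p1 = sum(a == "include" for a, _ in decisions) / n
    p2 = sum(b == "include" for _, b in decisions) / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)   # agreement expected by chance
    kappa = (observed - expected) / (1 - expected) if expected < 1 else 1.0
    return observed, kappa

# Hypothetical double-screened sample of 200 titles
sample = ([("include", "include")] * 40 + [("exclude", "exclude")] * 150 +
          [("include", "exclude")] * 6 + [("exclude", "include")] * 4)
print(agreement_stats(sample))   # -> (0.95, kappa of roughly 0.86)
```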
Second step – abstract review
In the second step of the review, we studied the titles and full abstracts in greater depth to assess the eligibility of each paper that had not been excluded at the title review. A first reviewer conducted this exercise and then passed the paper (and, where appropriate, comments) to a second reviewer. The aim of the first reviewer was to be inclusive: the aim of the second reviewer was to be more selective. Where the two reviewers disagreed they met to discuss the title and abstract. If agreement was still not possible then the paper was taken through to the third step of the review for a study of the full paper, along with the papers where there was agreement on inclusion.
Reasons for exclusion were: not health related; not a human study; no mention of engagement in research (or related terms); no clinical outcomes or processes. Reasons for inclusion were: mention of engagement in research, or of research in combination with collaboration, multicentre, organisational or other related terms; and mention of clinical outcomes or processes in the form of empirical data.
Third step – full-paper review
The third step was a further relevance and initial quality check of all the included papers from the second step to determine which papers were suitable to proceed through to the data extraction stage.
Determining the inclusion criteria for this step was complicated because the relationship between research engagement and improved health care had to be demonstrated in some way in the included papers. In relation to clinical research, for example, the fact that researchers who had been involved in a particular trial were now using its findings was not, by itself, sufficient. Instead, and as far as possible, we attempted to include only studies that examined in some way whether or not the clinicians/institutions who had been engaged in the research were adopting the findings more rapidly and/or extensively than other clinicians/institutions, i.e. we were looking for some measure of control within the study. For collaborative and action research, slightly different considerations had to come into play because, by its very nature, such research was intended to be most relevant for those engaged in it.
During the earlier parts of the review we identified some potentially important papers describing activities such as participation in research networks or action research that we considered to be a form of engagement in research and that in some instances seemed to lead to improved health care. Therefore, while we had, as noted, defined research engagement quite tightly to refer to engagement in research, we also wanted to make sure we captured the full range of activities that might come under that term and not restrict ourselves to looking just at clinical trials as Clarke and Loudon31 had explicitly done. Therefore, to add precision to our inclusion criteria, we explicitly set out some of the activities that we believed could be considered to be included under the heading ‘engagement in research’. Our final inclusion criteria are set out in Box 3.
Our final inclusion criteria were:
(a) Includes empirical data
(b) Explicitly includes engagement in research in any way including:
-
agenda setting
-
conducting research
-
action research
-
research networks where the research involvement is noted
OR, implicitly includes engagement in research through membership of a research network and, even though participation in a specific study is not noted, there is a comparison of health care between such settings and other settings (whether or not that comparison shows a difference)
BUT NOT
-
solely engagement with research, for example continuing medical education, evidence-based medicine, implementation efforts, critical appraisal, etc.
-
patient engagement in research
(c) Includes assessment of health-care processes/outcomes including, for example, use of clinical guidelines
BUT NOT
-
just research utilisation; or
-
just adoption of research in policy-making, with no follow through into improved health care in organisations or services; or
-
just improvements such as staff satisfaction or morale
We applied broadly similar inclusion principles across all categories of papers, and, where possible, reflected the spread of approaches we saw in the literature by including studies in organisational settings, and collaborative and participatory studies. This meant, for example, seeking to include studies that made some attempt to show that the use of the findings from engagement in collaborative or action research resulted in improvements in health-care performance, and that the clinician/institution behaviour was sustained beyond the period of the intervention. In other words, we attempted to distinguish a sustained impact from a more temporary study effect. Ideally such studies would also show some evidence of differential uptake of findings by the clinicians/institutions involved in the research, as measured against control groups not involved. But we found that this was rarely studied: collaborative or action research is often undertaken in response to the specific needs of the clinicians/institutions engaged in that research, and frequently does not include any control.
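Read as a set of rules, the Box 3 criteria can be restated roughly as in the sketch below; this is an illustrative restatement with hypothetical field names, not a tool used in the review.

```python
def meets_inclusion_criteria(paper):
    """Hedged sketch of the Box 3 logic; the field names are hypothetical
    and `paper` is simply a dict of boolean flags describing a study."""
    empirical = paper.get("empirical_data", False)                                # criterion (a)
    engaged = (paper.get("engagement_in_research", False) or                      # criterion (b)
               paper.get("network_membership_with_healthcare_comparison", False))
    not_excluded_engagement = (not paper.get("engagement_with_research_only", False) and
                               not paper.get("patient_engagement_only", False))
    healthcare_assessed = (paper.get("healthcare_process_or_outcome", False) and  # criterion (c)
                           not paper.get("research_utilisation_only", False) and
                           not paper.get("policy_adoption_only", False) and
                           not paper.get("staff_satisfaction_only", False))
    return empirical and engaged and not_excluded_engagement and healthcare_assessed

print(meets_inclusion_criteria({"empirical_data": True,
                                "engagement_in_research": True,
                                "healthcare_process_or_outcome": True}))  # -> True
```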
All three reviewers agreed on the papers taken through to the final data extraction stage of the review, and a data extraction sheet was completed by one reviewer for each of these papers. A quality check was informed by checklists available as part of the Critical Appraisal Skills Programme or similar, but the diversity of methods used in the papers meant that no one quality appraisal tool could be rigidly applied.
The number of papers involved at each step of the focused review is set out in Figure 2.
Analysis in the focused review
The papers in our focused review are even more diverse than the 13 papers in the Clarke and Loudon review,31 which did not involve a meta-analysis because the papers were too heterogeneous. Therefore, we too decided not to conduct a meta-analysis, but instead to provide an account of each paper in tabular form. Each paper that reached the final data extraction stage was also analysed in relation to:
-
Its importance to this review, based on quality (especially the level of control in the study), the size of the study and its relevance to our review question.
-
Whether the findings were positive (showing that research engagement did improve health care), negative (showing no positive impact) or mixed. Under this interpretation, a ‘negative’ finding did not necessarily mean that health care worsened; it might have remained unchanged over the course of the study. Some papers provided mixed data about improvement that were inconclusive and difficult to interpret. Findings that were partially positive and partially inconclusive we labelled ‘mixed/positive’; findings that were partially negative and partially inconclusive we labelled ‘mixed/negative’.
-
The degree of intentionality of the link between research engagement and health-care performance (by-product, research network, or intervention).
-
The scope of the impact made by research engagement (broader impact/specific impact).
-
The level of engagement discussed (clinician or organisational). We initially intended to analyse papers according to the four levels of engagement mentioned in the ITT – clinician, team, service or organisational – but eventually used the two levels of clinician and organisation because, at levels above that of individual clinicians, there is little consensus about the reporting terms used and we could not readily apply the separate categories of team, service and organisation.
Finally, each of the papers was examined to identify any factors that the authors were proposing as possible causes of the improvement in health-care performance.
This analysis was supplemented by the wider review described below.
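A minimal sketch of how these dimensions might be captured for each paper is given below; the record structure and field names are hypothetical and serve only to summarise the analysis just described.

```python
from dataclasses import dataclass, field
from typing import List

FINDING_LABELS = ("positive", "mixed/positive", "mixed/negative", "negative")

@dataclass
class ExtractionRecord:
    """Hypothetical sketch of one line of the data extraction sheet, covering
    the five dimensions described above plus any proposed mechanisms."""
    paper_id: str
    importance: int                  # relative importance to the review (quality, size, relevance)
    finding: str                     # one of FINDING_LABELS
    intentionality: str              # 'by-product', 'research network' or 'intervention'
    scope: str                       # 'broader' or 'specific'
    level: str                       # 'clinician' or 'organisational'
    proposed_mechanisms: List[str] = field(default_factory=list)

    def __post_init__(self):
        if self.finding not in FINDING_LABELS:
            raise ValueError(f"unknown finding label: {self.finding}")
```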
Stage 3: wider review and report
The final stage was an informal wider review. This was primarily intended to contribute to a fuller understanding of the relationship between research engagement and improved health-care performance, and, in particular, to help us identify and explore the mechanisms through which research engagement can improve health care. It was intended to build on relevant theories and supplement the focused review. The papers considered for this wider review included all the papers from the full-paper review stage of the focused review, plus papers identified as potentially relevant through the initial mapping and ongoing snowball exercises but not included in the focused review. The papers that were additional to the 33 finally included in the focused review were interrogated and sorted into groups according to the theoretical approaches outlined in Chapter 2 and the emerging categories of mechanisms. At this stage many papers were excluded from further consideration as they were not relevant to the issues being reviewed.
The remaining papers in each category were reviewed in an attempt to identify any that met one or more of the following criteria:
-
despite not meeting the full inclusion criteria for the focused review, nevertheless illustrated positive or negative findings about the impact of research engagement on performance, especially on aspects about which there was a dearth of evidence from the focused review
-
had at least reasonably strong empirical data describing progress some way along the pathway from research engagement to improved health-care performance
-
provided a strong descriptive account of initiatives involving mechanisms through which some form of research engagement might improve health-care performance
-
were relevant theoretical and/or review papers that helped illuminate the issues.
We then used papers identified through this process to help provide a fuller understanding and a context for our findings from the focused review about whether or not research engagement improves health-care performance, and to assist exploration of the suggested mechanisms through which this might happen (see Chapters 5 and 6).
To produce the full report, team members took the lead in drafting different sections of the three chapters of this report (see Chapters 4–6) which contain the results of the evidence synthesis. Each chapter was then reviewed by all team members.
Chapter 4 Results from the focused review
Thirty-three papers were finally included in the data extraction of the focused review. 97–129 They included 12 of the 13 papers in Clarke and Loudon's review,97–102,104,105,108–111 and an additional 21 papers,103,106,107,112–129 including 10 published since the beginning of 2009114–119,121,123,126,128 when Clarke and Loudon conducted their search. We had a broader remit and conducted our search in a different way, but we were able to benefit from knowing which papers they had included and ensuring that our search terms captured as many of these papers as possible.
Clarke and Loudon reported: ‘We were unable to do a meaningful meta-analysis within any of the categories because of heterogeneity across the individual studies’ (p. 3). 31 This was an even greater problem for us as our additional papers added even more heterogeneity. The matrix described in Chapter 2 is an initial attempt to provide us with a way to explore this heterogeneity.
A summary of the papers is presented in Table 2, which describes a range of information for each study, including the clinical area, research question, methods, outcomes, and further notes for each study. It also includes information about the first four of the five dimensions on which we analysed the papers, as set out in Chapter 3. These were: importance; whether the study was positive (or mixed/positive) or negative (or mixed/negative); the degree of intentionality; and whether the impact was broader or specific.
Paper, location of research and importancea | Clinical area and research question | Method (including size of study) | Outcomesb | Notesc |
---|---|---|---|---|
By-product papers | ||||
Adler, 197897 UK 2 |
Inguinal hernia and varicose veins Does clinical practice change in the area conducting a RCT? |
Review of hospital activity analysis data for one small health district and two neighbouring ones: no details given on number of patients | Positive Significant reduction (almost 50%) in mean length of stay recorded in the study area between 1970 (the year before the start of the study) and 1975 (2 years after). This change was not found in two adjacent health districts |
Conclusions: ‘the research findings played a part in
changing clinical practice in the study area’ (p.
146) Very early study By-product/impact of specific (trial) research |
Andersen et al., 200698 Denmark 1 |
Asthma To determine how conducting a company-sponsored clinical trial influenced physicians' adherence to international treatment recommendations and their prescribing of the pharmaceutical company's drugs |
Observational retrospective cohort study comparing 10 practices conducting a trial on asthma medicine (5439 patients) with 165 control (non-trial-conducting) practices (59,574 patients) | Negative The baseline proportion of asthma patients using inhaled corticosteroids was 68.5% in trial-conducting practices and 69.1% in control practices. Conducting the trial did not influence guideline adherence (OR after 2 years 1.00, 95% CI 0.84 to 1.19). There was a slight increase in prescribing of the pharmaceutical company's drugs |
‘The observed lack of impact on guideline adherence (i.e.
treating asthma patients with inhaled steroids) should be
interpreted with caution. . . a
“ceiling effect” should be considered. If there is
no place for improvement, even the best intervention appears to be
ineffective’ (p. 2763) Above comments could possibly be analysed in relation to Rogers S-curve because research engagement might make more impact in relation to ‘early adopters’ than to ‘laggards’ By-product/negative on impact of specific (trial) research |
Chen et al., 200699 USA 2 |
Laryngeal cancer How was treatment of advanced laryngeal cancer influenced by the 1991 randomised study demonstrating the value of non-surgical therapy (chemoradiation)? Three questions: Were there changes in practice? Did the 1991 publication accelerate the trend? Did treatment trends vary by facility type? |
Patterns of care study using National Cancer Database (1985–2001) for 74,000 patients broken into two periods. Data analysed separately for three types of facility: community hospital, community cancer centres and teaching/research facility | Positive Percentage of advanced-stage patients treated with chemoradiation increased from 8.3% to 20.8% whereas the proportion treated with radiation alone decreased from 38.9% to 23.0%. Use of chemoradiation increased at a significantly faster rate after the 1991 publication at both community cancer centres and teaching research facilities |
‘Possible reasons for lower rates of chemo-RT [radiotherapy]
and higher rates of radiation alone at community hospitals and
community cancer centres compared with teaching/research facilities
may be lower awareness of treatment advances, lack of
multidisciplinary expertise or availability of specific treatments
at the facility, or referral bias’ (p. 837) Other studies suggest there might be much more of an effect from facilities that conduct research than from those that are teaching institutions, therefore a bigger impact might have been identified had the research facilities been separately analysed By-product/broader impact |
Clark et al., 2003100 Canada 2 |
Apheresis To determine whether or not initiation of a RCT leads to increased use of the experimental therapy outside the trial |
Data on national apheresis use during three Canadian RCTs for MS, TTP, and MCN were obtained from 19 major medical centres in Canada. Also initial and follow-up questionnaires sent to 24 Canadian physicians in trial and non-trial centres seeking views on an increase in apheresis activity | Mixed/negative During all three RCTs, there were large increases in use of apheresis. The majority of the increased use of apheresis was outside trials: for MS, 30 of 49 patients per year (61% of increase); for TTP, 49 of 56 patients per year (72% of increase); and for MCN, 60 of 72 patients per year (57% of increase), which occurred in both non-trial and trial centres. Most survey respondents noted an increase in apheresis activity during the trials and attributed it to a ‘jumping-the-gun’ phenomenon. Apheresis was not found to be useful for MS, so early spread did not improve health care |
‘one of the major problems of RCTs is the potential for
misunderstanding the uncertainty principle. . . The
problem of allowing patients to confuse experimental intervention
with “cutting-edge” therapy seems to be a generic
one. Physicians may be guilty of blurring the distinction because
they are advocates for an experimental treatment’ (p.
1354) The answer to the research question was relatively clear, but needs careful interpretation for this current review as shown by the MS case which highlights the dangers of adoption prior to trial results By-product/mostly negative broader impact and impact of specific (trial) research |
Das et al., 2008101 UK 2 |
Barrett's oesophagus To assess the variation in practice of Barrett's oesophagus management in comparison with accepted international guidelines before and after the introduction of a large Barrett's oesophagus RCT with protocols including those of tissue sampling |
A validated anonymised questionnaire was sent to 401 senior attending gastroenterologists asking for details of their current management of Barrett's oesophagus, especially histological sampling. Of the 228 respondents, 57 individuals (each from a different centre) were in the first group to enter the AspECT. Change in practice in these centres assessed | Positive Effect of the AspECT: clinicians in centres where the AspECT had started improved adherence to ACG guidelines compared with their previous practice (p < 0.05). BE patients were given 18.8% more biopsies compared with previous practice, and 37.7% of the patients were entered into the AspECT (p < 0.01) |
‘recruitment in this trial may also lead to improved quality
of endoscopic surveillance even for non-AspECT patients’ (p.
1085) The analysis shows improved health care in the trial centres for trial and non-trial patients, but no comparisons with other centres reported here By-product/impact of specific (trial) research |
du Bois et al., 2005102 Germany 1 |
Ovarian cancer To evaluate the pattern and quality of care for ovarian cancer in Germany and analyse prognostic factors with emphasis on characteristics of treating institutions, hospital volume, and participation in clinical trials |
Survey of all 1123 German gynaecology departments on patients with
histologically proven invasive epithelial ovarian cancer including
descriptive analysis of pattern of surgical care and systemic
treatment. Univariate and multivariate analysis of prognostic
factors. One third of all patients diagnosed in the third quarter of
2001 in Germany, 476 patients, were included Described in journal as being a review paper |
Positive Standard care according to German guidelines provided to only 35.5% of patients with early ovarian cancer. Multivariate analysis showed treatment in an institution not participating in co-operative studies to be associated with inferior survival: an 82% increase of risk (HR 1.82, 95% CI 1.27 to 2.61; p = 0.001). Hospital volume did not affect treatment outcome |
The quality of care in Germany was more comparable with
international EBP in areas where Germany has been more involved in
trials (i.e. advanced ovarian cancer). Authors suggest this supports
‘the hypothesis that study activity contributes to better
standard treatment’ (p. 189). NB. this strengthens the
general hypothesis, but weakens evidence for the impact coming
through generic mechanisms (i.e. better attitude to EBP) rather than
more study-specific ones? By-product/mostly broader, but some specific trial impact |
Hébert-Croteau et al., 1999103 Canada 2 |
Breast cancer This study was conducted to determine whether or not therapies for patients with breast cancer varied according to hospital caseload |
Women newly diagnosed between 1988 and 1994 with early stage node-negative primary breast cancer were randomly selected from the Quebec Tumour Registry and the Quebec hospital discharge database. The sample was stratified according to whether or not the hospital participated in multicentre clinical trials. Data were collected from medical charts, and only women having undergone dissection of the axilla were included in the analyses (n = 1259). Logistic regression analysis was used to adjust for case mix and organisational variables | Positive The focus of this study was on hospital caseload. In most of the aspects considered a higher proportion of the treatments received in hospital involved in clinical trials were consistent with guidelines and best practice than in hospitals not involved in clinical research |
‘our data suggest that large centres, especially those
actively involved in clinical research, rapidly adopt innovative
therapeutic modalities for the treatment of breast cancer and
support both the promotion of clinical research and the amalgamation
of health-care resources to ensure optimal care of women with breast
cancer’ (p. 954) Need to contrast with some other studies that found hospital size was not the key factor Study generally good quality, but research engagement was not the main focus of the analysis By-product/broader impact |
Janni et al., 2006104 Germany 2 |
Breast cancer To determine the extent to which treatment strategies and patient care are affected by participation in the ADEBAR study, a RCT |
A standardised questionnaire was sent to all 198 participating centres (98 responded). The questionnaire covered five areas of interest: previous inclusion of patients at the same tumour stage in other studies; the type of chemotherapy received by comparable patients previously outside the study; change in the intensity of medical care since participating in the ADEBAR study; the information gained through participation in the study; and changes in the overall quality of medical care | Positive 59.0% of the centres noted an increase in the intensity of patient care as a result of participation in the study, independent of the care provided purely because of the study. By being part of a research study, with a regular flow of information via newsletters, study meetings, etc., 80.0% noted an improvement in their professional knowledge in the field of breast cancer. 31.6% of the centres reported an improvement in the overall quality of their patient care since the start of the trial |
‘these results support the hypothesis that carrying out the
study has a positive effect on the current medical care of
participating patients, irrespective of the knowledge gained later
from the actual findings of the study’ (p.
3665) ‘carrying out the study makes a contribution to this flow of information [about carrying out the therapy]’ (p. 3666) The study was of centres engaged in the clinical trial, so we assume (but not explicit) that it refers to care for patients in general in the centres, not just for patients in the trial? By-product/impact of conduct of specific (trial) research |
Jha et al., 1996105 Canada 2 |
AMI To compare characteristics and outcomes of patients with AMI participating in two thrombolysis trials with those of non-trial patients at study hospitals and external hospitals |
Population-based data on hospital admissions and mortality from AMI for all hospitals in Ontario from 1989 to 1992 were linked to data on trial participants in two distinct thrombolysis studies (GUSTO I and LATE). In total, data for 1515 trial patients; 18,654 non-trial patients in the same hospitals; and 12,299 patients in external hospitals | Mixed/positive The main findings focus on better outcomes for participants in the trial than non-participants at study hospitals or non-study hospitals. This would not be relevant per se for our review but study goes on to show that, even after adjustment for factors such as age, gender, comorbidity and revascularisation rates, participants have a survival advantage over non-participants that is larger than expected from thrombolysis alone However, when comparisons are made between non-participants in trial hospitals and non-trial hospitals there is no difference for one trial, but a marginally significant reduction in chance of survival for the other trial |
Very complex to unravel the evidence relevant for our review, but
there might be an improvement in health care from engagement in
research beyond the specific intervention of the
trial By-product/some impact of conduct of specific (trial) research |
Jones et al., 2000106 USA 2 |
Appendectomy Have surgeons' habits been influenced by a previously published study conducted at the hospital? |
Charts reviewed for 200 patients who underwent appendectomy from August 1998 to December 1998 in one hospital that had conducted a small (50 patients) appendectomy trial from April 1994 to December 1995. In addition, a formal survey was conducted of all staff surgeons to ascertain their procedure of choice for appendicitis, and the reasons for their preference | Negative in relation to application of findings from own
study Despite the hospital's published paper supporting OA over LA, 79% of the appendectomies in the hospital in the 6 months examined were attempted laparoscopically. The median operative time was longer for LA. Survey results showed that most staff surgeons prefer LA |
Assumed negative because of low take-up of trial findings in
hospital where trial conducted, but lack of control data means it is
difficult to tell if findings possibly made some impact compared
with elsewhere. Raises questions about whether or not health care
necessarily improved by application of hospital's own small
study By-product/negative for impact of specific (trial) research |
Karjalainen and Palva, 1989107 Finland 2 |
Leukaemia To determine whether patients with multiple myeloma treated in three consecutive clinical trials of chemotherapy by the Finnish Leukaemia Group during 1979–85 had a more favourable prognosis than either patients treated in the trial area before the trials, or those treated in the rest of Finland |
Comparison of the time trend in survival of patients living in the trial area (17 of the 21 main hospital districts serving more than two thirds of the patients with multiple myeloma) with that of patients living in the rest of Finland | Positive For cases diagnosed in 1979–85 the difference in the 5-year ‘cumulative relative survival rates between patients in the trial area and those in the reference area was 10%, those in the trial area doing better. Generalised proportional hazards regression analysis of the first 5 years of follow-up showed that the patients in the trial area had a survival advantage over those in the reference area’ (p. 1069) |
There are questions about how far this study fits our criteria, but
the authors' conclusions show the focus was on the protocol
of the trials not the regimens: ‘The patients in the trial
area benefited from the clinical trials, which suggests that the use
of a treatment protocol improves the end results of treatment. In
other words, the results favour a systematic treatment schedule in
preference to a schedule determined by the free choice of a
clinician’ (p. 1069) By-product/impact of specific (trial) protocols |
Kizer et al., 1999108 USA 2 |
AMI Aims were to (1) evaluate recent trends in selected treatments of AMI in relation to the publication of RCTs, statistical overviews, and task-force guidelines, and (2) compare prescribing practices in AMI management between physicians in routine clinical practice and physicians who design and/or implement RCTs |
Database review of the use of aspirin, β-blockers, ACE inhibitors, and calcium channel blockers on entry and at discharge in > 8000 patients enrolled in the MILIS, TIMI 1, 2, 4, 5, 6, and 9B trials with ST-elevation (and depression in MILIS) myocardial infarction for a period approaching 2 decades (August 1978 to September 1995) | Positive ‘For all medications under study, increases and decreases in use associated with publication of clinical data occurred earlier and more steeply for the discharge cohort (prescriptions by physicians participating in RCTs) than for the enrolment cohort (prescriptions by physicians in routine practice)’ (p. 79). Furthermore, subsequent prescribing practices among RCT investigators and their colleagues have higher concordance with published findings than those of physicians in routine practice |
How far the application of the drugs is outside of application in
trials of those specific drugs is complicated, but it does seem that
it probably is: ‘We hypothesised that physicians who
participate in RCTs both by their prescribing behavior and their
incorporation of drug recommendations in RCT design, apply findings
in the literature more promptly and thoroughly than physicians in
routine clinical practice’ (p.
81) By-product/(mostly) broader impact |
Majumdar et al., 2002109 Canada/USA 1 |
Myocardial infarction Tested whether or not physicians and hospitals that take part in specific clinical trials are more likely to adopt their findings |
Cross-sectional analysis of data of patients enrolled in GUSTO-1 trial, comparing adoption of ACE inhibitors among GUSTO patients at SAVE trial sites with adoption among GUSTO patients at non-SAVE sites. (SAVE was an earlier trial showing the benefits of ACE inhibitors.) Included 25,886 patients at 659 hospitals. Also compared the use of calcium channel blockers in hospitals that had taken part in a trial with null results with those that had not | Negative No difference in the use of the findings of the SAVE trial between participating hospitals and other hospitals. Similarly no difference in use of the null results between hospitals participating in that trial and other hospitals |
Authors suggest two possible explanations: investigators
‘unconvinced by findings from a single trial’;
‘a number of “non-scientific” factors (e.g.
local practice style, habit or inertia, expert opinion, patient
demand/expectations, commercial detailing, or other marketing
strategies) have greater influence on prescribing
. . . even among the physicians who work at the
hospitals that took part in generating the evidence’ (p.
144) Comes to a different conclusion to Majumdar et al. (2008)110 By-product/negative for impact of specific (trial) research |
Majumdar et al., 2008110 USA 1 |
Unstable angina Explored whether or not the hospital characteristics essential to the successful conduct of hospital-based clinical trials (physician leadership, shared goals, infrastructure support) also lead to better clinical processes (adherence to guidelines) and better patient outcomes (lower mortality) |
Cross-sectional analysis of data of patients in hospitals that participated in trials compared with patients in hospitals that did not. 174,062 patients in 494 hospitals. Used multivariate linear regression models to adjust for all available hospital characteristics | Positive Hospitals that participated in clinical trials had better adherence to guidelines and lower rates of mortality even after adjusting for a range of factors such as size, caseload, teaching status |
Authors do not explicitly revisit the factors they hypothesised
might be linked to better processes and outcomes, but do point out
that even after allowing for some major institutional
characteristics they ‘still demonstrated an independent
association between hospital trial participation and
outcomes’ (p. 662). Also say du Bois et
al.102
the only previous study that ‘looked carefully for evidence
of a trial participation effect’ (p. 660) Comes to a different conclusion to Majumdar et al. (2002),109 but examining broader impact rather than impact of specific study on which the organisations had worked By-product/broader impact |
Meineche-Schmidt et al., 2006111 Denmark 1 |
Gastro-oesophageal reflux disease in general practice To compare management strategies in daily clinical practice between GPs who had and had not participated in a trial |
Cross-sectional analysis comparing: (a) disease management among GPs
who participated in the ONE trial, and those who did not, (b)
symptom presentation and patient satisfaction between patients who
participated in the ONE trial, and those who did not Included 64 ONE GPs, 58 other GPs, and 1206 patients |
Positive Participation in the trial influenced doctors and patients: treatment modalities introduced in the trial were used subsequently by GPs; patients who had participated consulted less for symptoms and used more medication, compared with patients who did not |
‘Experienced GPs import treatment modalities from the trial
into the daily clinic’ (p. 1124) Patients participating in trial had lower symptoms than those who did not – possibly because they were more ‘experienced’ patients. It is not clear therefore whether effects found were due to trial or to differing trial populations See also Das et al.101 in this same clinical field By-product/impact of specific (trial) research |
Morton et al., 2006112 Australia 2 |
Sexual health Investigated whether or not clinician involvement in research leads to improvement in clinical practice within a sexual health service |
Retrospective case note review of 100 cases during three time periods comparing diagnosis and management practices of clinicians who were high and low recruiters in a cross-sectional study | Mixed/positive Introduction of research was temporarily associated with improved clinical practice in high-recruiting clinicians only |
‘Our study demonstrates that involvement in clinical
research can lead to improvements in clinical practice in those
clinicians who actively recruit for the study. These findings
support both the colocation of research and clinical centres, and
clinicians being actively involved in these studies’ (p.
185). No other information provided about the characteristics of the
practitioners in the two groups or about the patients in either
group By-product/some impact of specific (trial) research |
Pancorbo-Hidalgo et al., 2006113 Spain 1 (?) |
Pressure ulcer care To determine: (a) Spanish nurses' level of knowledge of existing guidelines for pressure ulcer prevention and treatment, (b) frequency of implementation of this knowledge in clinical practice, and (c) professional and educational factors that influence knowledge and practice |
A survey that targeted a cluster randomised sample of nurses working in hospitals, primary health care centres and elder care centres. 2006 nurses surveyed, 740 responses (36.9%) | Positive Taking part in research projects did not improve the level of knowledge of existing guidelines, but did improve self-reported knowledge implementation |
Taking part in pressure ulcer research significantly improved
practical implementation of knowledge, ‘so that the clinical
practice of nurses who have participated in pressure ulcer-related
research projects is closer to the recommendations of clinical
guidelines. We think that this is because these professionals have
more motivation and a more favourable attitude towards the
implementation of research’ (pp.
335–6) By-product/broader impact |
Pons et al., 2010114 Spain 1 |
Heart failure and AMI Tested whether or not a hospital's relative performance on health-care quality indicators is related to its relative performance in some measures of biomedical research |
Cross-sectional analysis of secondary data of in-hospital risk-adjusted mortality for CHF and AMI and several bibliometric measures of publications in cardiovascular disease in 50 hospitals | Positive A statistically significant low-to-moderate negative correlation between in-hospital mortality for AMI and CHF and weighted citations ratios of research output in Spanish hospitals. However, teaching status had a stronger correlation with hospital mortality and there was no correlation between mortality and number of research publications |
‘As well as not being able to identify the mediating
variables between health care quality and research quality, we
cannot identify the direction of causality which may be
uni-directional or in both directions simultaneously’ (p.
208) Important conclusion: ‘An adjusted bibliometric measure of research production could plausibly be incorporated as a complementary indicator in the comparative evaluation of quality between Spanish hospitals’ (p. 208) By-product/broader impact but correlation is with a measure of research quality not level of engagement |
Rich et al., 2011115 UK 1 |
Small-cell lung cancer To determine how features of patients and hospitals influence access to chemotherapy and survival for people with small-cell lung cancer in England |
Statistical analyses of National Lung Cancer Audit Data, Hospital Episode Statistics and National Cancer Research Network data detailing the number of patients entered into lung cancer trials at each NHS trust. 7845 patients | Mixed/positive Trusts with an interest in recruiting people into lung cancer clinical trials in general are more likely to give chemotherapy to people with small-cell lung cancer (66% received chemotherapy in high-trial hospitals compared with 59% at those with low trial participation), and chemotherapy was associated with improved survival But also says: ‘Whether the NHS Trust where a patient was first seen was a centre with a high trial participation rate or not did not affect overall survival’ (p. 749) |
Main conclusion: ‘Patients first seen at a hospital with a
keen interest in clinical trials are more likely to receive
chemotherapy’ (p. 746); however, also say ‘The main
determinants of Trust level variation are not known’ (p.
752) Note: the increased use of chemotherapy in high-trial centres was not associated with an increase in chemotherapy-related deaths (suggests the high-trial centres were not overtreating people) By-product/some broader impact |
Rochon and du Bois, 2011116 Germany 1 |
Ovarian cancer Evaluation of the relation between the outcome for newly diagnosed ovarian patients and their hospital's participation in clinical trials over 3 years |
Statistical analyses of patient data and data about hospital volume and participation in clinical trials. All 1123 German gynaecology departments approached. Analysis based on 476 patients from 165 hospitals, of which 50% were study hospitals | Positive Patients treated in study hospitals had a higher chance of receiving treatment according to national guidelines ‘In our study, participation in clinical trials as a quality characteristic of hospitals was associated with improved survival in patients with epithelial ovarian cancer’ (p. vii17/18) |
The authors suggest that their model implied the hypothetical
assumption that patients who were in study hospitals have a better
outcome than patients in non-study hospitals ‘because
hospitals participating in trials accumulate knowledge and develop
special infrastructures associated with study
participation . . . study
participation of a center but not an individual patient could add to
a superior outcome based on receiving better quality of
care’. They went on to claim that the hypothesis was
supported by their data indicating ‘that the observed
benefit was not limited to patients enrolled in active
protocols’ (p. vii18/19) By-product/broader impact |
Salbach et al., 2010117 Canada 2 |
Stroke To identify determinants of research use in clinical decision-making among physical therapists providing health-care services to people with stroke |
Cross-sectional mail survey and statistical analyses. 1155 physical therapists surveyed, final sample of 270 eligible respondents (23%). Ordinal regression used to identify factors associated with research use | Positive Research participation is significantly associated with research use in clinical decision-making Therapists who reported participating in research were 4.3 times more likely than those who did not participate in research to use research literature ≥ 6 vs. 0–1 times a month |
‘It is reasonable that research participation, positive
attitudes, and self-efficacy to implement EBP are inter-related and
important factors that facilitate the incorporation of research
findings into clinical decision making’ (p.
8) Weaknesses of this study:
|
Network papers | ||||
Abraham et al., 2010118 USA 1 |
Addiction: alcohol use disorders Does research involvement through the National Institute on Drug Abuse's CTN enhance innovativeness beyond the specific interventions that are tested? |
Panel longitudinal design: repeat interviews sought with 127 treatment units in the CTN (92% of eligible) and random sample of 362 non-CTN centres. Multivariate logistic regression analysis | Positive CTN units 3.2 times more likely to have adopted acamprosate for at least one patient. The ‘differences by CTN participation were significant even in models that included a wide range of organizational characteristics’ (p. 281) |
‘These findings support the idea that exposure to the
process of implementing innovation treatment technologies over time
may influence concurrent and future decisions to adopt EBPs’
(p. 281) From same team and some overlapping issues as Knudsen et al.121 and Ducharme et al.120 Considered absorptive capacity Network/broader impact |
Carpenter et al., 2011119 USA 1 |
Breast cancer Are organisational affiliations relating to teaching and research associated with accelerated adoptions of innovative breast cancer treatments? |
Surveillance, Epidemiology, and End Results-Medicare data were used to examine the diffusion of SLNB for treatment of early stage breast cancer among 3874 women aged ≥ 65 years and diagnosed between 2000 and 2002, shortly after Medicare approved and began reimbursing for the procedure. Bivariate and multivariate analysis | Positive Patients treated at an organisation affiliated with a research network – the ACOSOG or other NCI co-operative groups – were more likely to receive the innovative treatment (SLNB) than patients treated at unaffiliated organisations (OR 2.70, 95% CI 1.77 to 4.12; OR 1.84, 95% CI 1.26 to 2.69, respectively). ‘Neither hospital teaching status nor surgical volume was significantly associated with differences in SLNB use’ (p. 172) |
Clear results for impact of research networks, but
‘Unexpectedly, this study also found that organizational
factors previously associated with cancer care quality and outcomes
– teaching affiliation and surgical volume – did not
contribute independently to the early adoption of innovative breast
cancer treatments’ (p. 177) ‘These findings reinforce diffusion theory, which specifies S-shaped diffusion curves that differ between early adopters and later adopters’ (p. 176) In discussion state that some surgeons might have had direct experience of SLNB through a trial: this reduces extent to which it is clearly a study of broader impact Network/broader impact (mainly) |
Ducharme et al., 2007120 USA 1 | Substance abuse. What is the effect of organisational participation in the CTN on the subsequent adoption of EBPs? Three goals: ‘(1) to examine the extent to which exposure to innovative treatment techniques influences organizational adoption of those techniques; (2) to examine the impact of the CTN on the diffusion of these two treatment approaches; and (3) to examine whether similar predictor variables explain the adoption of behavioural and pharmacological innovations’ (p. 323) | Panel longitudinal design: repeat interviews | Mixed/partly positive for impact of trial. ‘Net of a host of organizational variables and controlling for time, treatment programs having direct involvement in a CTN buprenorphine trial were five times more likely to have adopted the medication 6 months after the baseline interview’ (p. 327). However, neither prior research involvement in general, nor membership of the network without involvement in the specific trial, made an impact. Some differences between the hurdles to be crossed in implementing the respective drug and behavioural treatments might explain the differences | Various possible reasons for impact from involvement in the buprenorphine trial but lack of impact from involvement in the trial of behavioural treatments include a time scale issue: buprenorphine (Subutex® and Suboxone®, Reckitt Benckiser, UK) was newer than the voucher-based incentives technique, which was ‘comparatively farther along the “S-shaped curve” that characterises the diffusion of innovations (Rogers, 1995). It may be that different models are needed to understand the adoption of innovations in the earlier stages versus the later stages of diffusion’ (p. 328). See also Knudsen et al.121 and Abraham et al.118 Network/partly positive for specific trial impact but negative for broader impact
Knudsen et al., 2009121 USA 1 | Substance abuse. To examine the adoption of buprenorphine over a 2-year period in the National Drug Abuse Treatment CTN CTPs; in particular, to compare adoption in CTPs that were involved in the trial and had the buprenorphine protocol with those CTPs without buprenorphine-specific protocol experience | Panel longitudinal design: repeat interviews. 92% of eligible CTN centres (n = 241) took part in the baseline interviews, and 94% of those that were eligible for the 24-month follow-up participated (n = 220). Multivariate logistic regression models | Positive for trial involvement. Involvement in a buprenorphine protocol continued to be a strong predictor of adoption at the 2-year follow-up: after controlling for the other organisational characteristics included, CTPs with experience in a buprenorphine protocol were 6.1 times more likely to have adopted it at 24 months than CTPs that had not [p < 0.001 (two-tailed test)] | Claim findings fit with Rogers'52 argument about the trialability of innovations, in terms of whether or not organisations can gain experience with an innovation on a limited basis. ‘Clinical trials offer such experience, where an organization does not have to commit to long-term adoption but can try an innovation for a limited period with a specified number of clients’ (p. 311). Important 24-month follow-up study of part of the baseline position described in Ducharme et al.120 Network/positive for impact of specific (trial) research
Laliberte et al., 2005122 USA 1 | Breast cancer. Explored the relationship of a treating facility's membership of one of three cancer research networks funded by the NCI to the provision of appropriate primary treatment to older women with early stage cancer | Three-level hierarchical regression model. Analysis of a database linking SEER registry data and Medicare claims in patients aged ≥ 65 years with early stage breast cancer to data on the treating facility, which included membership in cancer research networks | Positive. Patients treated in facilities with multiple membership of research networks were more likely than patients treated in non-member facilities to receive MAST or BCS plus RT than BCS alone: increased odds by 42% and 60%, respectively | ‘Organizational factors may influence compliance with treatment guidelines and be useful in improving the quality of care’ (p. 471). Paper uses a very specific definition of a research network that encouraged research uptake as well as research participation (may not apply to all research networks). Network/broader impact (mainly)
Rhyne et al., 2011123 USA 2 | Acanthosis nigricans and diabetes risk. To learn more about the short- and long-term effects on clinician and/or patient behaviour from participation in a PBRN study | Mixed-method design. Two clinician surveys (first survey: 73 clinicians out of 86 in the study; second, follow-up, survey: 72 clinicians out of 96 in the study), followed by in-depth interviews with 16 clinicians after the first survey. Comparison of findings from a short-term follow-up survey of clinicians in one study with findings from a long-term follow-up of clinicians in a second study | Positive. A marked and sustained degree of change in the primary care encounter as a result of participation in PBRN studies | ‘Several factors could have contributed to the reported changes; (1) focus of the PBRN study on an area of concern for clinicians; (2) interactive clinician education . . . (3) rapidly diagnosed condition; (4) “teachable moment”; (5) increased clinician self-efficacy’. The authors then note that several components of PBRNs, ‘while not exclusive to PBRNs, are accomplished more effectively and efficiently through the longitudinal structure of a PBRN’ (p. 501). The study describes self-reported behaviour change; given the potential for bias, the authors comment that ‘direct observation of primary care encounters would be an important next step’ (p. 501). Network/specific study impact
Siegel et al., 2006124 USA 2 | Otitis media. To explore the effects of a PBRN study on SNAP on patients and practitioners | Survey comparing antibiotic use among PBRN-SNAP study participants (178 families and 17 practitioners in 11 practices) and controls (30 randomly selected community practitioners; 18/30 respondents), comparing antibiotic use before the SNAP study and 1 year after. Case note study | Positive. Both groups showed a decline in antibiotic use but it was only statistically significant in the PBRN group | ‘These results suggest that implementing a study in a PBRN had a positive and lasting effect on the study practitioners. This effect also shows that PBRNs can be successful in translating research into practice’ (p. 523). Network/specific study impact
Warnecke et al., 1995125 USA 2 | Breast cancer. To explore the effect of the CCOP on the clinical practice of participating clinicians | A reanalysis of data from two previous evaluations of the CCOP, in which physicians participate as equals in the research process. Evaluation 1: patient records from 90 hospitals, including 66 affiliated to the CCOP. Evaluation 2: patient records from 64 hospitals, including 56 affiliated to the CCOP. 11,450 cases in total | Positive. Physicians who directly participated in clinical trials were more likely to adopt the new treatment, adopted it at a faster rate, and were less likely to abandon the state-of-the-art treatment after initial adoption | The authors suggest that the CCOP provides an important model for understanding how to change professional behaviour. It illustrates, they claim, what can be accomplished when it is in the mutual interest of the physician, the agency sponsoring the research, and the treatment facility to participate in a strategy aimed at innovation and dissemination. In terms of effectiveness, the CCOP model in which community physicians participate directly in clinical research ‘probably works best when the science is changing rapidly and when keeping current with these changes is critical’ (p. 338). The theory by which the treatment is justified is, the authors claim, ‘intelligible, thereby reducing uncertainty about the new treatment, because participating physicians attend scientific meetings where the rationale is discussed as protocols are developed’ (p. 339). Network/specific trial impact
Intervention papers
Chaney et al., 2011126 USA 2 | Depression. To determine whether or not health-care organisations could improve depression care quality by using an EBQI approach to adapt a CCM to local and organisational context | A cluster randomised trial of EBQI as applied to a CCM for depression | Mixed/positive. Patients in intervention practices (EBQI-CCM sites) were more likely to receive appropriate antidepressant care (66% vs. 43%; p = 0.01). Increased antidepressant use, however, did not translate into robust improvements in depression symptoms, functional status or satisfaction with care in intent-to-treat analyses | Prior studies of CQI or lower-intensity EBQI for depression in primary care have not shown improved prescribing. The results emphasise the importance of active primary care clinician engagement in the CCM. Intervention/impact of specific implementation research around which the intervention focused
Goldberg et al., 2002127 USA 2 | Diabetes. To assess whether or not delivery systems can be designed to conduct CQI as part of clinical care provision | A large family medical centre was divided into two parallel group practices (called firms). These frontline structures were supported to serially adapt and evaluate selected CQI interventions by first introducing process changes in one firm but not the other and comparing the groups. As all the required longitudinal data were contained in a computerised repository, it was possible to conduct these controlled ‘firm trials’ in a matter of months at low cost. However, the study also continued monitoring events for 5 years to evaluate long-term effects | Mixed/negative. ‘During a 3-year period, implementation of point-of-service reminders and a pharmacist outreach programme increased recommended glycohaemoglobin (HbA1c) testing by 50% (p = 0.02) and reduced the number of diabetic patients inadequately controlled by 43% (p < 0.01). Following this outcome improvement, patients exhibited a 16% reduction in ambulatory visit rates (p = 0.04)’ (p. 156). The observed outcome improvement was reversed during the subsequent 2 years, because staffing austerities forced by unrelated declines in clinic revenue caused the withdrawal of the trial interventions | ‘The processes and outcomes of diabetes care were improved . . . Health care systems can, by conducting serial firm trials, become learning organizations. CQI programs of all kinds will likely never flourish, however, until quality improvement and reimbursement mechanisms have become better aligned’ (p. 156). The authors also suggest that systems change is superior to educational interventions. Intervention/negative for intervention-specific research in the longer term
Hall et al., 2010128 USA 2 | Rehabilitation for veterans with war injuries. To describe practice changes associated with the FCC intervention | Cross-site, mixed-method evaluation of the implementation of a family care QI collaborative for VA rehabilitation of veterans with moderate-to-severe war injuries through four regional PRCs. In each collaborative, health services researchers partnered with programme leaders and rehabilitation specialists | Positive. ‘Family centred practices and satisfaction improved at sites with lower baseline scores (p < 0.05) and was equivalent across sites after the pilot. Providers initiated specific family centred practices that often began at one site and spread to the others through the collaborative. Sites standardised family education and collaboration’ (p. S18). Scores on the measure predicting successful implementation of the intervention beyond the pilot were promising | ‘Providers believed that the collaborative produced a “culture change” from patient-centered to family-centered care and viewed program leadership and health services researchers' involvement as crucial for success’ (p. S18). There also seemed to be ‘indicators of the sustainability of these practice changes and commitment to continued spread of family-centered care for families of the war injured’ (p. S24). This study was part of the VA's QUERI. Intervention/specific intervention research into a new approach
Puoane et al., 2004129 South Africa 2 | Malnourishment in children. To improve the clinical management of malnourished children in rural hospitals in South Africa | Participatory research. A pre- and post-implementation descriptive study in three stages | Mixed/positive. In the 12 months after implementation, case fatality fell by approximately 25%. However, the study also found that the rate of weight gain in the ‘catch-up’ phase was poor, and could be calculated for only 23% of the children because of incomplete records. ‘Process scaled up to a further 23 hospitals’ | ‘Participatory research led to the formation of a hospital nutrition team, which identified shortcomings in the clinical management of severely malnourished children and took action to improve quality of care’ (p. 31). Results mixed. Message about the importance of involving all the local nutrition team, and also the wider hospital, provincial and district personnel. Intervention/some impact of specific research around which the intervention focused
Table 2 supplements the account given below, which is divided into three sections based on the three categories of the degree of intentionality taken from one of the two main dimensions in the matrix outlined in Table 1 in Chapter 2: by-product, research network and intervention. In what follows, we collate and explore the collective data about the five dimensions against which we analysed the papers. These are the four dimensions examined in Table 2, plus the level of research engagement (i.e. individual clinician or organisational). The final section of this chapter provides a brief overview of our findings from all the focused review papers.
By-product papers
We included a total of 21 papers in the by-product group. 97–117 In this set of papers, the main purpose of the research engagement of clinicians or organisations was to conduct, or participate in, one or more studies of the efficacy or effectiveness of some new procedure or therapy. The papers we examined were separate studies, usually conducted later, that explored the impact on health care that had arisen as ‘by-products’ of the research engagement in the original study.
The 21 papers in this group97–117 included 12 from Clarke and Loudon,97–102,104,105,108–111 and a further nine papers consisting of four published since the Clarke and Loudon search,114–117 and others that were the result of our wider inclusion criteria. Some of the additional papers fell into both categories. For instance, we included papers describing the findings from two surveys of health-care staff that used regression analysis to establish a correlation between research participation and improved practice by those staff, one of which was published prior to Clarke and Loudon's search113 and one after. 117
The papers are from a small number of developed countries: four from the USA,99,106,108,110 four from Canada,100,103,105,117 one from North America (Canada/USA),109 three from Germany,102,104,116 three from the UK,97,101,115 two from Denmark,98,111 two from Spain,113,114 one from Finland107 and one from Australia.112
Of the 21 papers97–117 in the by-product category described above, 1797,99,101–105,107,108,110–117 produced positive findings about the link between research engagement and improved performance. We classified seven of the positive studies as important to our review. 102,110,111,113,114–116 Four papers98,100,106,109 produced negative, or mixed/negative, findings and we classified two of these as important. 98,109
Most of the 17 papers97,99,101–105,107,108,110–117 reporting positive findings described measurable improvements in the processes of care thought to lead to better outcomes, rather than better health outcomes per se. Just six studies reported better health outcomes: three that had been included in Clarke and Loudon's review (du Bois et al.,102 Jha et al. 105 and Majumdar et al. 110) and three further studies (Karjalainen and Palva,107 Pons et al. 114 and Rochon and du Bois116).
The specific/broader categories
Some of the studies listed in Table 2 show aspects of both broader and specific impact, but it was usually clear which was most important in any particular study. We classified the impact in 1099,102,103,108,110,113–117 of the 17 positive by-product studies as being either entirely, or mainly, in our broader category, and we classified six of these studies as important. 102,110,113–116 The first two102,110 are the examples cited in the NHS Confederation briefing3 (described in Chapter 1) that claimed research engagement does improve health-care performance, and both show improved health-care outcomes, not just improved processes. These two102,110 papers show that patients with a particular disease – ovarian cancer in Germany and unstable angina in the USA, respectively – have better health-care outcomes (in terms of survival) if treated in a hospital with a research-active team or service. We classified the impact of the remaining seven97,101,104,105,107,111,112 of the 17 positive by-product studies as specific, but we classified only one of these as important. 111
Comment
It is worth considering what the four negative studies98,100,106,109 in this group tell us. These were studies in which the hypothesis that research engagement improves health-care performance was not confirmed.
In some studies a negative finding might reflect a limited scope for research engagement to make any impact. For example, in the study reported by Andersen et al. 98 adoption rates of a specific drug were already > 66% before the trial, and therefore they suggested that their negative findings should be interpreted with caution because the research engagement did not have the scope to operate as it might in other circumstances.
As noted, if we identified a study as negative, this meant it did not support the hypothesis that research engagement improves health-care performance. It did not necessarily mean the health care worsened. However, several studies, while not themselves demonstrating that health care worsened, highlighted potential reservations about, and even dangers of, research engagement. Clark et al. 100 showed that sometimes clinicians, including some of those conducting research, can ‘jump the gun’ and start applying a theoretically attractive therapy before a trial has been completed. In this case, two of the three trials showed the new therapy had a positive effect, and in the third trial it made no difference; therefore, here there was no worsening of health care. Jones et al. 106 claimed that the findings of the trial conducted in their own hospital were not making an impact on practice in the hospital, and we therefore classified it as a negative study. However, such examples raise the issue that, in some cases, involvement in trials might lead to researchers adopting the findings of their own small trial even when the findings were not necessarily the same as those from other studies. Similar pitfalls have been much discussed in the literature; for example by Grimshaw et al. 130 We discuss these important concerns further in the analysis of the limitations of our review in Chapter 7.
Network papers
We identified eight papers in the network group. 118–125 All eight come from the USA: three relate to various breast cancer networks,119,122,125 three to the Clinical Trials Network (CTN) of the National Institute on Drug Abuse,118,120,121 and two describe aspects of PBRNs. 123,124
Seven118,119,121–125 of these eight papers were positive about research engagement leading to health-care improvement, at least in terms of the processes of care. We classified four118,119,121,122 of these studies as important, as we did one study120 that reported more mixed findings, but with some positive claims about research engagement leading to improvements in health-care performance.
The five studies that we regarded as important118–122 all focused on the service/organisational level. The three we regarded as less important123–125 focused on the individual clinician level, including both studies from the PBRNs.
Three of the studies described broader improved health-care performance resulting from engagement in a research network;118,119,122 we classified all three as important. The first was in substance abuse and the other two in the breast cancer field. Five other studies120,121,123–125 described a specific impact, in that those in the network who had been involved in a trial applied the findings of that study to a greater extent than other clinicians and centres that had not been involved in the trial. Of these five studies, we classified two as important120,121 and three as less important. 123–125
The specific/broader categories
The paper by Ducharme et al. 120 also highlights the value of applying the specific/broader categorisation when analysing the issues arising. The study considered two different therapies for substance abuse, a pharmacological innovation (buprenorphine) and a behavioural innovation, and three different aspects of research engagement: prior involvement in trials in general; membership of the research network (the CTN); and prior involvement in the network's trial of one of the two interventions. In this study, any impact from prior involvement in trials in general, or from network membership, would count as broad impact, and any impact from participation in the single trial as specific impact. Overall, the study found no evidence to support the claim that mere membership of the research network led to improvements in health-care performance. However, those centres in the network that had been involved in a trial of buprenorphine were five times more likely than other centres to have adopted the medication 6 months after a baseline interview, and this is what accounted for the difference in adoption rate between CTN centres and non-CTN centres. Therefore, this study is inconclusive about a broader impact, but does provide a strong example of a specific trial impact in the case of buprenorphine, though not in the case of the behavioural intervention.
The study by Knudsen et al. 121 was a partial follow-up of the Ducharme et al. study,120 reporting adoption 24 months after the baseline interviews. It focused on the pharmacological intervention and the comparison was restricted to centres in the CTN, but it confirmed the positive finding of the earlier study about the specific impact of trial involvement.
Comments
Nature of the positive papers
The overwhelmingly positive nature of this set of papers was related to recorded improvements in the processes of care; none of the studies identified improvements in health outcomes.
Provenance of the papers
All the papers in this group were from the USA. This apparent bias is, we believe, due to the more substantial history of research networks in the USA. This picture is likely to change as research networks are now becoming a more important feature in countries such as the UK, and one of several non-US papers in the wider review is an abstract describing a recently conducted UK study that was presented in November 2012 while this report was under review.
Intervention papers
Four papers126–129 make up the intervention group, which contains studies describing initiatives that aimed to improve health care by engaging health-care staff in specific research studies. Three126–128 come from the USA and one129 from South Africa. Two describe the VA's QI initiatives that involved research engagement interventions aimed at improving health-care performance. 126,128 The remaining two studies describe interventions that were not part of a major initiative: Goldberg et al. 127 describe research associated with a continuous quality improvement (CQI) effort in a large family medical centre, and Puoane et al. 129 describe a PAR initiative in two rural South African hospitals.
One128 of these four studies is positive and two126,129 are mixed/positive because, although they have some positive elements, they are limited in scope. Finally, we classified one study127 as mixed/negative because, when examined over a longer period than the intervention project itself, the improvements in health-care performance were soon reversed. Most of the improvement described in the studies was in health-care processes, but small improvements in health outcomes were reported in a couple of the studies. One of these127 was the study in which the improved outcomes were reversed when the trial intervention was withdrawn; it was therefore counted as a mixed/negative study and was not included in the list of positive studies that had resulted in improved outcomes.
The specific/broader categories
In all four cases the improvements that did occur related to the use (or increased use) of whatever approach or therapy was developed in the specific intervention. Some examples of how the broader impact category might also be important at the intervention level are discussed in the next chapters.
Comment
Even among this small number of studies there was considerable diversity in terms of scope and outcomes. As discussed further in the next chapter, it was particularly difficult to define clear inclusion criteria to enable us to distinguish these papers from the many others describing collaborative research approaches, QI research initiatives, participatory and action research, and research initiatives at an organisational level. As we did throughout the focused review, we attempted to identify studies that did have some measure of research engagement and made some attempt to measure the improved health-care processes and/or outcomes beyond the lifetime of the specific research. Many other similar studies that were identified during the searches were excluded because they did not fully meet our inclusion criteria but were available for us to draw on in the wider review.
Overall comments
Bringing the previous sections together, we have 33 papers,97–129 of which 2897,99,101–105,107,108,110–126,128,129 were positive (with six105,112,115,120,126,129 of those being only mixed/positive) in relation to the key question: does research engagement improve health-care performance? Of these, 12102,110,111,113–116,118–122 were of higher importance and 1697,99,101,103–105,107,108,112,117,123–126,128,129 of less importance. Five studies98,100,106,109,127 were negative (of which two100,127 were mixed/negative); of these, two98,109 were of higher importance and three100,106,127 of less importance. Four102,110,114,116 of the important studies and three105,107,129 of the less important ones reported some improvement in terms of health outcomes, whereas the rest reported aspects of improved processes of care.
Twenty-two studies97–99,102–105,107,109,110,114–116,118–122,126–129 were at the institutional level and 11100,101,106,108,111–113,117,123–125 at the clinician level. Of the 28 positive and mixed/positive papers, 1997,99,102–105,107,110,114–116,118–122,126,128,129 were institutional and nine101,108,111–113,117,123–125 clinician level. Of the five98,100,106,109,127 negative and mixed/negative papers, three98,109,127 were institutional and two100,106 clinician level.
Of the 2897,99,101–105,107,108,110–126,128,129 positive or mixed/positive studies, 1399,102,103,108,110,113–119,122 describe broader impact and 1597,101,104,105,107,111,112,120,121,123–126,128,129 describe specific impact. Within this overall balance, there was a contrast between 1099,102,103,108,110,113–117 out of 1797,99,101–105,107,108,110–117 of the by-product studies showing broader impact (as would fit with the absorptive capacity model), and all the intervention studies showing specific impact (as would fit a key element of the collaborative and action research theories). (We discuss these issues further in the next chapter where we also draw on the papers from the wider review.)
Chapter 5 Analysing the relationships between research engagement and performance: exploring possible mechanisms and drawing on the wider review
During the literature search we took 473 papers to full-paper review. Only 33 met our full inclusion criteria for the focused review, but many more contained some material that, we thought, might contribute to the request in the ITT for a ‘theoretically and empirically grounded assessment of the relationships, if any, between research engagement and performance’.
Our analysis of this wider body of evidence serves two overlapping purposes. First, it provides supplementary information and understanding in relation to the first question of whether or not research engagement improves health-care performance. Thus, although we decided that our main focus should be on studies that measured progress across the whole pathway from the research engagement to the improved health care, we also recognised that important contributions might be made by initiatives and studies that involved only some of the steps along the pathway. Second, this wider analysis allows us to consider a broader range of material related to the plausible mechanisms through which research engagement might lead to improved health-care performance.
In this chapter we use the matrix presented in Chapter 2 to build on the evidence presented in Chapter 4. We present our analysis according to the three categories of intentionality used there (by-products, research networks and interventions) and in each section use the second dimension in the matrix (a broader or more specific impact on health care) to explore what the papers we studied say about how that impact was produced. In each section we first discuss insights from the focused review papers. We then expand the discussion to take account of evidence from the wider review and, where relevant, of the theoretical approaches described in Chapter 2.
Mechanisms and further evidence linked to insights from the by-product papers
Mechanisms from the by-product papers in the focused review
The 21 ‘by-product’ papers97–117 in Table 2 are the largest group in the focused review. In these papers the main purpose of the original research engagement (by clinicians or institutions) was to conduct, or participate in, research studies to evaluate new therapies, procedures, etc. The by-product papers we examined were separate studies, usually conducted later, that explored the impact on health care arising as ‘by-products’ of the research engagement in the original study. Nevertheless, the authors of these studies have sometimes analysed, or speculated about, what might have caused this impact. In Chapter 2 we suggest that different mechanisms may be at play when research engagement has a broad impact (the use of research findings from wherever they come) as opposed to a more specific impact (the use of findings from the specific study in which the research engagement occurs). The positive ‘by-product’ papers divide almost equally between those reporting a broad impact (10 papers99,102,103,108,110,113–117) and those reporting a more specific impact (seven papers97,101,104,105,107,111,112).
Studies reporting a broad impact
At an individual level, an important mechanism is the change that research engagement can promote in attitudes and behaviour. Several of the studies in our review surveyed clinicians. For example, Pancorbo-Hidalgo et al. 113 demonstrate that the clinical practice of nurses who participated in pressure ulcer-related research projects was closer to the recommendations of clinical guidelines than that of nurses who had not, and state: ‘We think that this is because these professionals have more motivation and a more favourable attitude towards the implementation of research results’ (p. 336). In addition, Salbach et al. 117 show that therapists who reported that they had participated in research were 4.3 times more likely than those who did not to use the research literature regularly, and claim: ‘It is reasonable that research participation, positive attitudes, and self-efficacy to implement EBP are inter-related and important factors that facilitate the incorporation of research findings into clinical decision making’ (p. 8).
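To make the statistical reporting in these survey studies concrete, the sketch below illustrates, purely as an assumption-laden example and not as the analysis used by Salbach et al. 117 or any other study reviewed, how an odds ratio such as ‘4.3 times more likely’ is simply the exponentiated coefficient of a logistic regression. All data, variable names and the choice of Python with statsmodels are our own illustrative assumptions.

```python
# Illustrative only: simulated data showing how an odds ratio (e.g. '4.3 times
# more likely') is recovered as exp(coefficient) from a logistic regression.
# All names and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2013)
n = 5000                                          # large n so the estimate is stable
participates = rng.integers(0, 2, size=n)         # 1 = reported research participation
true_or = 4.3                                     # assumed odds ratio for illustration
log_odds = -1.0 + np.log(true_or) * participates  # baseline log-odds of -1.0 assumed
uses_research = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

df = pd.DataFrame({"participates": participates, "uses_research": uses_research})
fit = smf.logit("uses_research ~ participates", data=df).fit(disp=False)
print(np.exp(fit.params["participates"]))         # estimated odds ratio, close to 4.3
```

The same logic applies to the adjusted odds ratios reported by several of the studies in Table 2, where organisational covariates are simply added to the model and the reported ratio is the exponentiated coefficient for research engagement after those adjustments.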
Kizer et al. 108 also focus on clinicians, but concentrate more on the processes of trials as being important. They hypothesise that ‘physicians who participate in RCTs both by their prescribing behavior and their incorporation of drug recommendations in RCT design, apply findings in the literature more promptly and thoroughly than physicians in routine clinical practice’ (p. 81).
At an institutional level, an additional mechanism is the infrastructure that may be created to support trials. This might be used more widely, or for a longer period, to improve patient care. Thus Rochon and du Bois116 claim that ‘patients in study hospitals have a better outcome than patients in non-study hospitals because hospitals participating in trials accumulate knowledge and develop special infrastructures associated with study participation’ (p. vii18). Majumdar et al. 110 explore whether or not hospital characteristics essential to hospital-based clinical trials also lead to better clinical processes, in the form of adherence to guidelines, and better patient outcomes, and suggest that the key characteristics are physician leadership, shared goals and infrastructure support. These authors do not explicitly revisit these factors in their later analysis but they do claim to have ‘demonstrated an independent association between hospital trial participation and outcomes’ (p. 662). Looking at the other side of the coin, Chen et al. 99 suggest that the reasons for lower uptake of new treatments at community hospitals and centres compared with teaching/research facilities ‘may be lower awareness of treatment advances, lack of multidisciplinary expertise or availability of specific treatments at the facility, or referral bias’ (p. 837).
Studies reporting a specific impact
Where the focus is on specific impact linked to the research engagement of individual clinicians, mechanisms such as a greater awareness and understanding of the specific research findings might come into play, with clinicians being more likely to implement the findings of a specific study on which they were working. Meineche-Schmidt et al. 111 claim that ‘experienced GPs [general practitioners] import treatment modalities from the trial into the daily clinic’ (p. 1124), the inference being that their research experience allows them to understand those modalities better than their colleagues who are not working on that study.
At the institutional level, the processes and protocols applied in a specific study (not counting any impact from the regimens in the intervention arm) can lead to improved health-care performance. Two studies that are quite difficult to classify demonstrate this. Karjalainen and Palva107 highlight the role of study protocols:
The patients in the trial area benefited from the clinical trials, which suggests that the use of a treatment protocol improves the end results of treatment. In other words, the results favour a systematic treatment schedule in preference to a schedule determined by the free choice of a clinician.
p. 1069107
Janni et al. 104 show how being part of a study meant that 80% of the centres that responded to their survey claimed to have noted an improvement in their professional knowledge because of the regular flow of information via newsletters, study meetings, etc. According to Janni et al.,104 ‘these results support the hypothesis that carrying out the study has a positive effect on the current medical care of participating patients, irrespective of the knowledge gained later from the actual findings of the study’ (p. 3665).
Expanding the analysis with additional evidence from the wider review papers linked to the by-product concepts
The above section described a range of suggestions, or speculation, about the mechanisms through which research engagement might result in improved health-care performance. However, none of the suggestions were made by more than one or two authors, or supported by a range of evidence.
Here, therefore, we draw on both the focused review and the wider body of evidence and, in particular, on two analyses published in 2011: by Krzyzanowska et al. 95 from the Selby set of papers, and by Kahn et al. 131 from a set of three papers131,147,148 describing a RAND Corporation (RAND) study funded by the National Institutes of Health (NIH) in the USA that explored a range of issues related to translational research. Neither of these two analyses specifically associates itself with the by-product concept, but some of their thinking helps to inform the analysis here. The first provides useful insights into the range of mechanisms that might be at work. The second demonstrates how an increasing recognition of the ‘by-product’ type of benefits from research engagement has encouraged further thinking about how best to build on and regularise these opportunities.
Krzyzanowska et al. 95 describe various potential mechanisms through which outcomes may be improved in research-active settings. These include the possibility that research-active staff may be systematically different from their peers in non-research-active settings. They offer several possible explanations, including the personal characteristics of the staff, multidisciplinary collaboration, additional training and education, and specialisation. They suggest that ‘acquisition of new skills or technology as required for some clinical trials may lead to their systematic application and outcome improvements throughout an institution or health system’ (p. vii11). 95 They also claim that the conduct of clinical trials often requires new physical infrastructure, such as specialised equipment, that might remain once the research is complete.
Kahn et al. 131 suggest engagement in research can benefit community clinicians by helping them to remain current with new innovations and to learn and implement what is best for their patients. They also suggest some factors that might be particularly relevant for the USA context but can have wider application. They claim that many patients seek out clinicians who are at the cutting-edge of research and thus provide access to the best diagnostic and treatment options: in a competitive system this could mean that a higher proportion of patients are receiving care from such clinicians. More generally, they suggest that ‘Clinician participation in research builds infrastructure that can facilitate engagement with managed care’ (p. 2). 131
Absorptive capacity and broader impacts including attitudes and capacities
The idea that those conducting research have an attitude, capacity and interest in examining and possibly adopting the research conducted by others fits most neatly with the original absorptive capacity concept developed by Cohen and Levinthal37,38 in relation to the benefits for industrial companies of funding some of their staff to conduct research. We saw this in the focused review in, for example, the surveys of nurses113 and therapists. 117 The concepts here are consistent with our broader level of impact because the research that is absorbed is part of the global stock of knowledge rather than research the clinicians have produced themselves. 44
Further support for these ideas comes from other papers in our wider review, often from studies exploring research utilisation – a key step in the pathway between research engagement and improved performance. Belkhodja et al. 132 conducted a rigorous study of the extent and organisational determinants of research utilisation in Canadian health service organisations that did not quite meet our inclusion criteria for the focused review because it did not demonstrate improved health-care performance. This paper drew on the absorptive capacity concepts (though primarily the revised version from Zahra and George41). Following a large-scale survey, the regression analysis Belkhodja et al. 132 conducted on a range of factors linked to research utilisation by managers and professionals revealed that previous research experience was one of the two most important factors.
Various studies examined research engagement and research utilisation by nurses, including a trial by Tranmer et al. 133 This was the one study included in the Clarke and Loudon review31 that we felt did not quite meet our inclusion criteria because it mainly focused on attitudes towards research and research utilisation per se, rather than on changes in health-care processes or outcomes (research utilisation scores were higher for nurses who had been involved in research participation units than for nurses in units that did not participate). Niederhauser and Kohr134 reported that conducting clinically relevant studies was seen by clinicians as a facilitator of research utilisation. Tsai135 also demonstrated a link between research participation and research utilisation. Other smaller studies (e.g. that of McCoy and Byers136) amount to ‘insider accounts’ which provide a comparison only over time rather than with other comparable institutions, but which, nevertheless, add further examples of how increasing the research engagement of nursing staff can help facilitate EBP. Some studies applying the Payback Framework explicitly used the concept of absorptive capacity to help explain how engagement in health research can lead to benefits beyond the direct ones from the research findings. 44
There is therefore some evidence of an association between engagement in research and being open to and implementing new ideas. Overall, it seems reasonable to suggest that there is growing evidence that research engagement by itself engenders research utilisation, which is a step towards improved health care.
It will always be difficult to establish the direction of causality between conducting research and having an attitude of being open to, and seeking out, new ideas on how to improve patient care. However, there is some evidence of the two going together, at least in the accounts of some medical academics. 137 Further support comes from a study in the USA in which a team of researchers interviewed 24 clinically excellent internal medicine physicians at eight academic institutions. 138 The collation of these interviews provides self-reported evidence from these physicians about how their practices were influenced by the emphasis on research, including one who claimed:
I think in an academic model you have to still maintain an academic interest which I would define as not just wanting a status quo, but wanting to always get better and also having the interest of developing better practices.
p. 482138
One among many reasons claimed for the successful improvements in health-care quality achieved by the VA in the USA (see Chapter 6 for a detailed discussion) is that the funding provided to medical academics to allow them to spend some time conducting research made the VA an attractive destination for some of the highest quality physicians. 139
Absorptive capacity and specific impacts
Perhaps the most basic aspect of the idea that participation in a study leads, as a by-product, to improved health care occurs at what we have classified as our specific level of impact, and comes about because those who conducted the study implement its specific findings more rapidly than others. However, the mechanisms at play here do not relate to the classic version of absorptive capacity discussed above. Instead they are more akin to those identified as belonging to the ‘stages of innovation’ model and probably relate to the superior knowledge and understanding of the findings, and perhaps trust in their validity, that is gained through participation in the specific study. This was shown in the Meineche-Schmidt et al. 111 study from the focused review. Their conclusions reflect an earlier report of a survey of 54 British radiology units140 that noted that almost half the respondents participated in clinical trials and claimed that centres:
. . . able to participate in clinical trials were more likely to change practice than those who were not. Most of the 63% of respondents using a one or two fraction regimen had participated in clinical trials, and “trial results” was the most frequently cited reason for change in practice.
p. 72140
Despite these mechanisms being a by-product of research activity, they are now increasingly recognised in research systems, and attempts are being made deliberately to foster them in some of the initiatives described later. Daniel et al. 141 claim that participation in ‘national demonstration projects by hospitals provides opportunities for learning, collaboration, and early improvements’ (p. 301). There are of course important differences between research engagement and activities merely aimed at promoting implementation. We could use Rogers' distinction,52 drawn from Myers,142 between ‘(1) experimental demonstrations, which are conducted to evaluate the effectiveness of an innovation under field conditions, and (2) exemplary demonstrations, which are conducted to facilitate diffusion of the innovation to other units’. And even with Rogers' ‘experimental demonstrations’,52 as with similar approaches, the scope of the mechanisms that link research engagement to specific impacts will be limited because clinicians will not improve health care solely by implementing studies in which they have been involved. 143
Nevertheless, it is possible that we can expand the concept of a specific impact because we can also find evidence from the wider review of a local effect in which health-care staff in one region of the English NHS had a particularly favourable attitude towards a local programme of research. 144 In such instances, it is the fact that health-care staff know (and trust) the local researchers that might create the impact: they do not actually have to have conducted the research themselves. The diffusion of innovation literature has long put emphasis on the role of informal social networks as a means by which ideas are spread, with the research in the medical field by Coleman et al. 145 making an important contribution. It is possible to see how a wide distribution of research-engaged medical academics within a health-care system could help the uptake in their localities of the specific treatments with which they had been involved in trials.
What we have said about attitudes and behaviour change emphasises the positive impact of engagement in research, as did the majority of papers in the focused review. It is also important to note, however, that the wider review provides evidence that many clinicians do not view research engagement as a way of improving the health care they provide. Following an extensive series of interviews, Kahn et al. 131 list many barriers that clinicians in the community identified regarding participation in research, including the view that the study questions were not pertinent to topics of interest to themselves, their practice or their patients. Furthermore, they report ‘Clinicians need reassurance that research engagement does not threaten the doctor–patient relationship’ (p. 6).
Processes of care and building team and institutional infrastructure
The analysis of building capacity through research activities can be taken further, and here specific impacts can build into broader impacts. The process of designing and conducting research is claimed to involve the accumulation of knowledge and skill (and sometimes teams) that is necessary for the research but can also be a mechanism for improving health care. Several studies in the focused review described how the protocols and processes developed for research studies could then be applied more widely to the clinicians' patients outside as well as inside the trial, thus providing improved health care. 104,107
Further evidence to support these concepts comes from Haynes et al.,146 who conducted an international study exploring whether or not the implementation of a 19-item surgical safety checklist (based on research and designed to improve team communication and consistency of care) would reduce complications and death. The study found that implementation of the checklist was associated with reductions in the rate of death and complications. Although the implementation of the checklist in this study was the primary purpose and not a by-product, it provides evidence to support the concept of by-product benefits from the protocols developed and used in primary research studies. Furthermore, the improved processes from one study can then be carried over through being incorporated into aspects of the protocols used in subsequent studies.
Some studies from the focused review take the argument even further and describe attributes of the setting in which care is delivered, such as accommodation, equipment and personnel, which are brought in to perform research-related activities and may remain in place after the research is completed. For example, Rochon and du Bois116 suggest that their model ‘rather implied the hypothetical assumption that patients in study hospitals have a better outcome than patients in non-study hospitals because hospitals participating in trials accumulate knowledge and develop special infrastructures associated with study participation’ (p. vii18). They further suggested that their hypothesis was supported by the finding that the observed benefits from study participation were not limited to the patients in the trial, but were obtained by other patients in the trial hospitals. The medical academics interviewed in the US study reported above also thought that, although research took up time that could be used directly in health-care provision, it brought in more resources that could be used to provide better clinical care. 138
Building and spreading the capacity to benefit from the by-products of engaging in health research
We have seen that Zahra and George41 deliberately reconceptualised the absorptive capacity concept. We believe one facet of the original definition that is often overlooked is the key to our analysis of how the absorptive capacity concept contributes to understanding the mechanisms. This is the notion that it is the actual processes of research engagement that help prepare health-care staff and organisations to be able to absorb and use research and thus improve health care. Although this older notion still seems appropriate, we also agree with much of the newer literature on absorptive capacity that promotes the deliberate fostering of the capacity of institutions to be able to absorb research so as to produce benefits.
Along somewhat similar lines, as the types of issues discussed so far in this chapter are increasingly perceived to be important, there are now deliberate attempts to promote ‘by-product’ benefits. This might, in particular, be viewed as a desirable consequence of initiatives designed to boost clinical research, such as the US NIH's 2003 Roadmap for Medical Research. This is the initiative discussed in the RAND series of articles referred to earlier in this section. 131,147,148 Inevitably, such initiatives overlap with the network and collaborative approaches discussed later in this chapter. We did not identify any articles that show whether or not the research engagement encouraged by this NIH initiative has led to improved health-care performance. However, an article from 2012149 describing the progress made sets out the wide scope and aims of the NIH in promoting this. The authors claimed that by increasing community-based provider participation in research (CBPPR) in various ways:
. . . through Clinical and Translation Science Awards, federally funded provider-based research networks, and other mechanisms, the NIH has sought to advance the science of discovery by conducting research in clinical settings where most people get their care, and accelerate the translation of research results into everyday clinical practice.
(p. 3)149
Mechanisms and further evidence linked to insights from the network concepts
Mechanisms from the research network papers in the focused review
The set of eight papers in the networks section of Table 2118–125 describes the situation broadly in the middle of our spectrum of intentionality. We again apply the broader/specific categorisation, but on this occasion start with the specific category because some of the mechanisms at work here, especially in the adoption of research findings from the specific research in which the practitioners participated, are similar to those described above for the by-product category.
Studies reporting a specific impact
At an individual level, two studies describing the impact of different trials in the PBRNs in the USA, Rhyne et al. 123 and Siegel et al. ,124 both show how clinicians involved in a trial in the network adopted the findings from their specific trial to a greater extent than did other clinicians. In both cases the authors thought the network had influenced this rapid adoption. Relevant factors included the increased relevance of the research, and the increased knowledge and understanding of the findings gained through participation in the research. Thus, Rhyne et al. 123 claim ‘Several factors could have contributed to the reported changes; (1) focus of the PBRN study on an area of concern for clinicians; (2) interactive clinician education. . . (3) rapidly diagnosed condition; (4) “teachable moment”; (5) increased clinician self-efficacy’. The authors also note several of these components of PBRN ‘while not exclusive to PBRNs, are accomplished more effectively and efficiently through the longitudinal structure of a PBRN’ (p. 501).
Along similar lines, Siegel et al. 124 claim that their results ‘suggest that implementing a study in a PBRN had a positive and lasting effect on the study practitioners. This effect also shows that PBRNs can be successful in translating research into practice’ (p. 523). The evaluation by Warnecke et al. 125 of a research network created by the US National Cancer Institute (NCI) [the Community Clinical Oncology Program (CCOP)] reinforces this message and adds further detail. They suggest that the CCOP model, in which community physicians participate directly in clinical research, probably works best when the science is changing rapidly and when keeping current with these changes is critical: ‘The theory by which the treatment is justified is intelligible, thereby reducing uncertainty about the new treatment, because participating physicians attend scientific meetings where the rationale is discussed as protocols are developed’ (p. 338).
At an institutional level, Knudsen et al. 121 show that the centres involved in a trial, which had therefore applied the protocol, were over six times more likely to have adopted the drug trialled 24 months after a baseline interview than centres in the network that had not been involved in the trial. They claim that this specific impact, at the institutional level, fits with the theory from Rogers69 about the ‘trialability’ of innovations being important in allowing organisations to gain experience of them on a limited basis: ‘Clinical trials offer such experience, where an organization does not have to commit to longer-term adoption but can try an innovation for a limited period with a specified number of clients’ (p. 311). Perhaps the main distinguishing feature between this situation and some of the by-product mechanisms described above is that centres in networks are likely to be involved in clinical trials more frequently than others, and, therefore, might have this opportunity more often.
The study by Knudsen et al. 121 was a follow-up to an earlier study120 conducted 6 months after the baseline interviews, and this also found that those centres involved in the trial had adopted the drug trialled to a much greater extent than either other centres not involved in the trial or other similar centres not in the network. This provided strong evidence about the specific impact of research engagement through the network, but also showed, in this instance, that there were no broader benefits from research engagement through involvement in the network where that did not involve trial participation. A further issue in this case was that those centres that had been involved in a trial of a different treatment did not adopt it any more rapidly than other centres. In contrasting the adoption pattern of these two treatments that had been the subject of trials, the authors noted that the second treatment had been around for longer and had been more widely adopted prior to the trial conducted in the network. Therefore, they suggest the second treatment was ‘comparatively farther along the “S-shaped curve” that characterises the diffusion of innovations (Rogers, 1995). It may be that different models are needed to understand the adoption of innovations in the earlier stages versus the later stages of diffusion’ (p. 328). 120 This is an important point for our analysis; it suggests that the experience of conducting a trial on an innovation might not encourage adoption of that innovation (through the knowledge gained about it) when the innovation is already quite widely adopted. This is consistent with the findings of Andersen et al. 98 in the focused review (see Chapter 4).
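The references to an innovation being ‘farther along the S-shaped curve’ can be read against the logistic form commonly used to represent Rogers' diffusion curve. The short sketch below is purely illustrative: the functional form is one common formalisation rather than anything specified by the studies reviewed, and every parameter value is a hypothetical assumption. It simply shows why extra exposure to an innovation (for example through trial participation) matters less once cumulative adoption is already high.

```python
# Minimal sketch of a logistic formalisation of an S-shaped diffusion curve:
# cumulative adoption A(t) = K / (1 + exp(-r * (t - t0))).
# K = adoption ceiling, r = adoption rate, t0 = inflection point; all values are
# hypothetical and chosen only to illustrate the shape of the curve.
import numpy as np

def cumulative_adoption(t, K=1.0, r=0.9, t0=5.0):
    """Fraction of potential adopters who have adopted by time t."""
    return K / (1.0 + np.exp(-r * (t - t0)))

for t in range(0, 11):
    # Early on the curve, a nudge (such as trial participation) can shift adoption
    # noticeably; late on the curve, most adoption has already happened.
    print(t, round(float(cumulative_adoption(t)), 2))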
Studies reporting a broader impact
At an institutional level, three network studies show that institutions in a research network adopted various forms of EBP to a greater extent than comparable institutions not in the network. 118,119,122 Each of these studies, however, highlights different facets of networks as a mechanism. Abraham et al. 118 suggest that centres within networks may build up a record of implementing research findings: ‘These findings support the idea that exposure to the process of implementing innovation treatment technologies over time may influence concurrent and future decisions to adopt EBPs’ (p. 281). Laliberte et al. 122 refer to factors such as network membership possibly increasing the likelihood of physicians recommending guideline-concordant treatment and suggest that networks ‘may be useful in improving the quality of care’ (p. 471).
The study by Carpenter et al. 119 raises issues that are of wider interest. For example, whereas institutional membership of a network was shown in the multivariate analysis to be associated with greater uptake of a new treatment, other organisational factors previously associated with cancer care quality – teaching affiliation and surgical volume – did not contribute independently to the early adoption of innovative breast cancer treatments. The precise nature of the mechanisms at work is difficult to disentangle because the study examined the effect of membership of two research networks with some overlapping membership, with the highest adoption rates being for those affiliated to both networks. However, a higher adoption rate was associated with membership of the American College of Surgeons Oncology Group (ACOSOG) alone than with membership of the other research network alone. The reasons suggested by Carpenter et al. 119 include the integrated, programmatic approach to improving the quality of cancer care in organisations affiliated to the ACOSOG, plus the professional education, training and national meetings provided by the college ‘which would reinforce the external communication associated with this affiliation’ (p. 177). The authors examine adoption patterns over several years, and show that the impact of network membership was greatest in the period nearest in time to when the intervention was approved by Medicare. They, too, draw on Rogers52 and claim ‘These findings reinforce diffusion theory, which specifies S-shaped diffusion curves that differ between early adopters and later adopters’ (p. 176). 119
Expanding the analysis with additional evidence from the wider review linked to the network concepts
The mechanisms associated with research networks represent, in effect, the establishment and regular use of some of the processes discussed in the by-product section. A range of studies from the wider review provide further evidence to support the examples from the focused review of how the PBRNs themselves might be seen as effective mechanisms. These include two papers describing improvements resulting from the Dental Practice-Based Research Network. 150,151 The papers indicate various ways in which mechanisms associated with the operation of this dental network might lead to improvements in health care, including through attendance at network meetings.
In 2005, Mold and Peterson152 analysed the role of PBRNs and suggested that PBRNs operate at the interface between research and QI. They claimed that ‘PBRNs appear to be evolving from clinical laboratories into learning communities, providing grounds for generalisable solutions to clinical problems, and the engines for improvement of primary care delivery systems’ (p. S12). Their discussion of this evolution, and of the mechanisms through which the improved health-care performance is achieved, ties in with the distinctions we make in our matrix. First, they make the ‘by-product specific impact’ point that when clinicians participate in a project they are more likely to use the results. Second, they show that there are a range of mechanisms (including practice facilitators, project development meetings and network convocations) that allow two-way knowledge exchange through the research network, enabling clinicians to engage with question generation and the conduct of the resulting research, and therefore ensuring that the research is more relevant to practitioners.
Mold also co-authored a systematic review of the role of practice facilitators that showed that health-care professionals who assist primary care clinicians in research and QI projects play an increasingly important role in PBRNs. 153 A 2004 report analysed publications from several PBRNs and suggested that these networks ‘overcome the second translational block. . . [and] have the potential to help translate our rich basic and clinical knowledge into patient care’ (p. III-47–III-48). 154 A further illustration of the expanding and evolving role of PBRNs, and, potentially, their part in enabling research engagement to improve health-care performance, comes from the suggestion in a 2011 commentary by Williams and Rhyne155 that they should be renamed as ‘Health Improvement Networks’.
Three studies119,122,125 included in the focused review describe cancer networks. One of these networks, NCI CCOP, is also a PBRN. Its work is discussed in detail in a descriptive study in which Minasian et al. 56 identify various mechanisms through which this programme operates, including:
- the provision of relevant training and educational opportunities thought necessary to maintain high-quality research
- two-way communication between academic researchers and community physicians involved in the research network, including at conferences covering planned or ongoing trials, trial management and related topics, at which community physicians may gain a greater understanding of why certain research protocol features are important.
Minasian et al. 56 conclude that ‘The CCOP network is an example of how PBRNs can be mechanisms to effectively address the challenges facing clinical research and practice; continue medical progress that benefits patients, providers, and society; and more effectively translate research into practice’ (p. 4446). 56
Further evidence from the wider review about the impact of cancer research networks comes from Aiello et al.,156 who show how integrated health-care delivery systems in the US Cancer Research Network rapidly adopted the positive results of randomised controlled trials (RCTs) on adjuvant aromatase inhibitor treatment even before the full process of guideline development had been completed. These findings are consistent with those of better-controlled studies of cancer research networks.
The wider review, therefore, does provide more evidence about how research networks act as a mechanism (operating through a variety of detailed approaches) through which research engagement can improve health-care performance.
Limitations and the potential importance of context
The wider review also allows a more rounded assessment because some studies reveal limitations to what can be achieved by research networks. Bodenheimer et al. 157 analysed data from 17 PBRNs that received grants to study and encourage the adoption of interventions to improve patients' health-related behaviours. Here the focus was primarily on the role of practice-based research per se as a facilitator of practice improvement, rather than on the role played by the networks themselves. Lessons emerged about the sustainability of the approach and a potential conflict between improvement goals for all practices and the requirement that some practices be in a control arm.
Teal et al. 149 analysed the implementation of the US NIH's Roadmap for Medical Research (mentioned above) and report that the community practice organisations varied in how far they were able to achieve the objectives of the CBPPR initiative. They highlight the importance of context and claim that, by doing research in their own practice settings, providers may feel a greater sense of ownership and trust in clinical research results, which in turn may increase their commitment to acting on research findings: ‘CBPPR does not occur spontaneously or effortlessly. To participate in research, community-based providers often must implement systematic changes’ (p. 18). The authors continue by explaining that providers ‘must develop an organizational culture (shared values) that supports CBPPR and evidence-based practice. . . CBPPR is difficult to create and sustain in the absence of management support from local, affiliated hospitals and health systems’ (italics added). Almost echoing Williams and Rhyne,155 Teal et al. 149 suggest that it might be useful to frame CBPPR as ‘quality enhancement’ rather than ‘research’. They also note several ways in which the research network of the NCI had in fact actively engaged with the chief executives of participating hospitals and health-care systems to provide support.
Similarly, the RAND study that examined the circumstances necessary for the NIH's Roadmap for Medical Research to be successful identified various weaknesses in current research networks, especially the PBRNs, that limited their scope for achieving successful health-care improvements. 148 Its authors suggest that ‘PBRNs are not in a position to provide sufficient infrastructural support to consistently engage clinicians over time’ (p. 2).
Studies from outside the USA
The wider review also identified some evidence of a growing international interest in the benefits that might come from research networks. In the UK, a comparative case study of four primary care research networks158 drew on guidance for managing clinical networks published by the NHS SDO programme159 and concluded that generic primary care networks can help to integrate academic and service initiatives. The authors suggest that primary care organisations ‘may be ambivalent about engaging in research if they see it as a distraction from service priorities’ (p. 239),158 and may fear that research participation will become a barrier to practice improvement, citing Bodenheimer et al. 157 to support this claim. This study focused on research process and did not report any health-care outcomes; it was also conducted prior to the introduction of the system of research networks that is now a central feature of the NIHR.
A paper by Darbyshire et al. 57 in the 2011 supplement of the Annals of Oncology provides a detailed descriptive account of the development and role of clinical research networks within the NHS, and concludes that there is compelling evidence that ‘Patients and health-care professionals from all parts of the country are able to participate in and to benefit from clinical research and that health research and patient care are being integrated’ (p. vii42). In addition, an abstract presented in November 2012 describes the preliminary analyses from a study that aimed to determine whether or not the increase in National Cancer Research Network colorectal cancer research activity was associated with improved outcomes for patients with colorectal cancer in the NHS. Preliminary analyses indicate there was a significant reduction in 30-day mortality and improvements in survival for patients treated in high research activity hospitals compared with those treated in hospitals with no such activity. 160 Whicher et al. 161 describe how a research network in Ontario, Canada, supported comparative effectiveness research that linked decision-making with evidence generation in a transparent and efficient way. A study from Germany162 describes how evidence from the USA about the benefits of networks is being applied in Germany with the creation of a morbidity registry in the field of general practice. It seems likely that the international evidence about the impacts of research networks will continue to grow.
Mechanisms and further evidence linked to insights from the ‘intervention’ papers: collaborations and action research
Mechanisms from the intervention papers in the focused review
There are four ‘intervention’ studies in the focused review. 126–129 These are studies in which the degree of intentionality (about achieving health gain from research engagement) is high. The interventions described include collaborative approaches, QI research initiatives, participatory and action research, and organisational approaches where the intention is explicitly to produce improvements in health-care performance as a direct consequence of the research engagement of the organisation.
These papers largely describe the adoption of the specific research that featured in the intervention. However, they also raise issues about how broader impact can be achieved throughout an organisation which, not surprisingly, resonate with those explored above about how research networks operate: issues such as the importance of effective collaboration and the need for a supportive context.
Thus, in a report of QI research conducted in the Department of Veterans Affairs, as part of its QUERI, Hall et al. 128 conclude: ‘Providers believed that the collaborative produced a “culture change” from patient-centered to family-centered care and viewed program leadership and health services researchers' involvement as crucial for success’ (p. S18). This illustrates the importance of collaboration as a mechanism through which research engagement can improve health-care performance. In addition, a report of an action research project129 also illustrates the value of involving stakeholders in research aimed at improving local clinical services.
A study by Chaney et al. 126 of another VA initiative had an extremely complex protocol; it describes collaboration at more than one level – a collaborative approach to redesign was taken to encourage the use of an evidence-based collaborative care model.
There are also lessons from a mixed/negative intervention study. Goldberg et al. 127 examine a trial to see whether or not delivery systems could be designed to conduct CQI as part of clinical care provision. It involved researchers working with practitioners and experienced initial success. However, the successes were reversed when resource constraints meant the support structures associated with the trial were discontinued. According to the authors, health-care systems can, by conducting such trials, ‘become learning organizations. CQI programs of all kinds will likely never flourish, however, until QI and reimbursement mechanisms have become better aligned’ (p. 156). They claim, therefore, that system change is superior to educational interventions.
Expanding the analysis with additional evidence from the wider review linked to the concepts of collaboration and action research
Many reports on collaborative or action research studies did not get into our focused review for a number of reasons, but can nevertheless add to the insights gained. There are overlaps within this category, as when collaborations adopt a participatory action approach, and between this category and others, but we discuss each group separately.
Collaboration
There is a large literature on collaborative approaches. Over 40 papers met the criteria for our full-paper review, but most did not get into our focused review for various reasons (e.g. they did not cover the whole pathway from research engagement to improved health care, and/or they were descriptive insider accounts).
Here we draw in particular on studies that were included in major reviews, although in some cases the reviews were themselves conducted to inform evaluations. We start by considering an example of the latter from the field of health services management, an area in which it is often thought difficult to achieve evidence-based management163 let alone to demonstrate that research engagement by managers leads to improved health-care performance. In 2012, Bullock et al. 164 published an article, ‘Collaboration between health service managers and researchers: making a difference?’ describing key findings from their formative evaluation of the SDO's Management Fellowship Programme that was set up to ‘encourage greater engagement, linkage and exchange between research and practice communities in health-care management’ (p. 12). 165 Here the main mechanism is a relatively intensive approach in which an NHS manager is placed with a selected SDO-funded academic research project as a Management Fellow, usually for 12 months full-time equivalent over the life of the research project.
The evaluation concludes that all those involved thought that the Management Fellows had improved the quality and relevance of the research and, in a small number of cases, were beginning to develop capacity as conduits of research findings in the workplace. The range of capabilities developed by the Fellows included being able to assess, appraise and use research. There is also some evidence of ‘improved management skills and improved personal confidence’ (p. 14). Nevertheless, a number of limitations were identified, including that ‘engagement structures were often not in place’ (p. 15). Overall, this evaluation suggests that some broader impacts are beginning to emerge from the research engagement, but no evidence is given of improved health-care performance.
Bullock et al. 165 also conducted a targeted review of previous research partnership programmes, which found that there ‘were a number of existing or past programmes which focused on improving implementation, using a collaborative approach, but few fitted the model of the SDO Management Fellowships and fewer still were evaluated’ (pp. 29–30). 165 This indicates that this particular mechanism has not been widely used. Providing more general context for the article describing the evaluation, Bullock et al. 164 highlight some of the most relevant previous studies, and claim ‘The few evaluations of collaborative research programmes, largely from outside the UK, indicate positive outcomes’ (p. 3). Three articles are cited to support this statement,166–168 but none of these three studies met the inclusion criteria for our focused review.
In the first, Denis et al. 166 evaluated a Quebec Social Research Council (CQRS) grant programme to encourage long-term collaboration between researchers, decision-makers and practitioners that aimed to transform scientific knowledge production and professional practices. Both researchers and practitioners agreed their involvement within collaborative research teams contributed to the development of new skills and practices, and fostered the implementation of new modes of knowledge production and intervention. Both groups, however, encountered some difficulties in putting the dimensions of the collaboration that they valued into practice.
The second involved programmes funded by the CHSRF in which health system managers and public policy-makers participated in research. It was, on balance, a beneficial experience according to the participants. 167 A key finding for our review is that several interviewees referred to the participating decision-makers becoming ‘more reflective about their activities’ (p. S2:30). Again, this is a long way from showing improved health care, but it is a step on the pathway.
The Canadian examples, combined with the theoretical position of linkage and exchange,64 developed by Lomas at the CHSRF, illustrate a considerable interest in developing collaborative programmes as a way of improving health-care performance through research engagement. A substantial amount of collaborative activity has been further stimulated by the Canadian Institutes of Health Research (CIHR).
Other significant examples from Canada include an evaluation of the Need to Know Project in Manitoba, funded by the CIHR,169 and successful attempts to engage health-care managers in research through the Executive Training for Research Application (EXTRA) Program funded by the CHSRF. 170 In the latter, the 26 participants in the second cohort were surveyed at entry and at graduation nearly 2 years later. The proportion who thought their knowledge of research-based evidence was excellent or very good increased from 17% to 90%, and the proportion who thought their ability to promote the use of research evidence in their own organisation was excellent or good increased from 16% to 86%.
Another of the extensive attempts to evaluate partnership initiatives in Canada involved a further evaluation of the CQRS and showed how the collaborative nature of the research undertaken increased its applicability to health and social service institutions. 171
However, the findings are not always positive. Work in Canada on the collaborative approach has also identified situations in which engagement did not lead to greater use of the findings than among those not engaged in the study. Kothari et al. 172 examine the responses to a report on performance in breast cancer prevention by three units that were involved in the development of the report and three units for which it was relevant but which were not involved in its development. The authors report that the interacting units had a greater understanding of the report's analysis and attached greater value to it, but:
. . . interaction was not associated with greater levels of utilisation in terms of application. Both interacting and comparison units used the research findings to confirm that their ongoing program activities were consistent with the research findings, and to compare their performance relative to other units.
p. 117172
There are also partnership initiatives in the USA that have some similarities with those described above from Canada. Two studies of these were included in the focused review. 126,128 A range of others did not get into our focused review. For example, Gold and Taylor173 indicate that 30 out of a set of 50 collaborative projects had some operational effect, but the details were not sufficiently well described for inclusion in the focused review. In the USA, there are also more comprehensive and long-term organisational approaches such as that of the VA. (These are considered in the next chapter.)
The impact that can be made by the collaborative approach in general has been demonstrated in several systematic reviews, including those by Lavis et al. 85 and Innvær et al. 86 (described in Chapter 2). Many of the papers in these reviews were not relevant for inclusion in this review: first, because they focused on policy-makers' use of research rather than on health-care performance and, second, because the nature of the collaborations meant it was not always clear how far they focused on engagement in research, as opposed to collaborative efforts after the research had been conducted. Nevertheless, Lavis et al. 85 provide quite a strong endorsement for the principle of collaboration with their conclusions that ‘interaction between researchers and health-care policy-makers increased the prospects for research use by policy-makers . . . relationships with or involvement of health care staff in the research process increase the prospects for research use by managers’ (p. S1:39).
Although there was less evidence for the second of these conclusions than the first, the second is more relevant to our review. Just three papers were found to support it, and although none of them went quite far enough to meet the inclusion criteria for our focused review, they all make important points about collaboration as a mechanism through which research engagement might lead to improved health-care performance. Harries et al. 174 and Elliot and Popay175 describe the same study and attempt to identify factors facilitating or impeding evidence-based policy-making at a local level in the NHS. They show the importance of continued dialogue between researchers and those in the NHS who commission the research and who could become involved in the research process. The third study was an early one by Glaser and Taylor176 who examined a range of applied projects funded by the National Institute of Mental Health in the USA. This paper provides support for the claim that engagement in research by members of an organisation increases the relevance of the research and the likelihood that its findings will be used. Specifically, Glaser and Taylor176 claimed of the practitioners in the host agency that, if they ‘were actively involved in the development of the project, if they had good rapport with the research team and did not feel exploited, if their responses to the developing findings were sought and heeded, then there was greater likelihood that they would use the findings in their practice.’ The authors also suggested that such an endorsement ‘would be instrumental in promoting use of the findings in comparable agencies elsewhere’ (pp. 144–5).
Further examples of how the collaborative approach contributes to progress along the pathway from research engagement to improved health-care performance were included in a review33 of studies assessing the impact of health research. Two of these studies show how the Health Technology Assessment (HTA) programme in Quebec involved some collaboration between researchers and users of the HTAs, and that the reports produced made some impact in the health-care system. 177,178 In the third paper, the Alberta Heritage Foundation for Medical Research assessed the impact of a series of projects that it had funded and found some evidence that research teams that included decision-makers or users were more successful than those that did not. 179
Furthermore, there are a few case studies within larger sets of case studies of research impact assessments46,180 that might have met our inclusion criteria, but it did not seem appropriate to include such individual case studies in the focused review. Nevertheless, they provide further demonstrations of the health-care benefits that can come from the collaborative approach.
Evidence about the value of collaborations also comes in the form of an increasing number of initiatives from funders seeking to encourage collaboration, including collaboration over research engagement. A good example of how this can work very effectively is provided by the research centre funded by the Medical Research Council (MRC) and the medical research charity Asthma UK at King's College London and Imperial College London. Following a site visit, the House of Lords Science and Technology Committee highlighted the importance of the collaborative approach in its 2007 report on allergy:
We visited a striking example of effective collaboration at the MRC-Asthma UK Centre in Allergic Mechanisms of Asthma. . . The Centre combined their research strengths into one cohesive strategy, with its research priorities informed by national consultations on asthma research.
paragraph 7.17181
Finally, an international dimension comes from a paper by Sankaranarayanan et al. 182 which describes various research and training initiatives organised by the International Agency for Research on Cancer (IARC) in collaboration with national institutions and local health-care services in many low- and middle-income countries. Right from the planning stages, one major emphasis in IARC research studies, it is claimed, was to organise them ‘with quality assurance of different inputs and in such a way that the infrastructure, trained human resources and quality assurance practices, in-built as part of the research programme, contributes to improving the capacity of local health systems’ (p. vii26).
Participatory and action research
Over 40 action research or PAR studies met the criteria for the full-paper review, but examination of the full papers often revealed that a study did not fully meet our inclusion criteria for research engagement (sometimes because of unclear boundaries with educational or QI interventions) and/or did not go as far as demonstrating some measurable improvement in health-care performance in terms of care provided or health outcomes. This pattern reflects the comments made in Chapter 2 from the systematic reviews by Soh et al. 87 and by Munn-Giddings et al. 89
However, the paper by Puoane et al. 129 was included in the focused review because it described a health gain in the 12 months after implementation of the action research. Some of the papers not included also demonstrate progress being made along the pathway towards improved health care. Many of the action research papers describe studies involving nurses or allied health professionals. These include the study by Ullrich et al.,183 who claim that by employing a participatory worldview and action research they were able to introduce protected mealtimes for older people in a residential care setting in South Australia. Other studies describe a broader group of health-care professionals; one such study, based in a centre for women's health in Boston, USA, again did not give quite enough detail on improved health-care performance to be included in the focused review. In this study, the authors claim: ‘By engaging the workforce, collaborative interactive action research can help achieve lasting change in the health care workplace and increase physicians' and staff members' work satisfaction’ (p. 211). 184
The theme of increased staff satisfaction emerges in a number of action research studies, and can also be linked to the development of positive attitudes towards the use of evidence and to career development. Alligood185 reports on an action research study in the USA that was explicitly designed to improve care quality and nursing staff satisfaction, and claims that ‘Attention to nursing practice stimulated career development among the nurses to pursue bachelors, masters, and doctoral degrees’ (p. 981).
Overall results from Chapter 5
Several major points emerge from the findings in this chapter:
- The wider review demonstrates the existence of a diverse range of studies that in many cases support the idea that research engagement improves health-care performance, albeit with less well-controlled evidence and often going just some steps along the pathway from research engagement to improved health-care performance.
- Much of the evidence from the wider review is broadly in line with that of the focused review, strengthening the suggestions and speculation about various mechanisms.
- There is a growing literature on research networks, and an increasing understanding of the mechanisms through which they promote health gain and of the limitations encountered: context matters.
- There is also a growing and increasingly influential literature on interventions, particularly on collaborative approaches.
Chapter 6 The role of health-care organisations in supporting research engagement
In this chapter we return to the analysis in Chapter 1 about how organisational support for research engagement can improve performance and the mechanisms through which that change occurs. We mentioned the finding by Crilly et al. 74 that learning processes and their relationship with organisational design were important themes in the health-care literature, but that the question of organisational form had received little attention. We also identified one or two examples of health-care organisations that had paid particular attention to engagement in research in their overall approach to improved performance. We expand these comments below, drawing mainly on papers from the wider review. First we explore what the literature tells us about performance measurement and performance management and their relevance for research engagement. Then we consider the complexity of health-care organisations and the need to explore how various mechanisms operate at various levels, but not necessarily in harmony. Finally, we discuss the use of mechanisms through which research engagement operates at organisational level to improve performance, and the importance of local contexts.
Performance measurement and performance management
In a recent review of the relevant literature, Fryer et al. 186 distinguish between performance measurement and performance management, noting that performance measurement provides information about the past whereas performance management ‘extrapolates the data to provide information about the future’ (p. 480). 186 These authors list the key features of a successful performance measurement and management system that are identified in the literature. These features include:
- alignment of the performance management system and the existing systems and strategies of the organisation
- leadership commitment
- a culture in which performance management is seen as a way of improving and identifying good performance and not a burden that is used to chastise poor performers
- stakeholder involvement
- continuous monitoring, feedback, dissemination and learning from results
- being flexible and capable of developing as the management style and culture evolves.
Another, earlier review was undertaken by Bourne et al. 187 as part of a study looking at the use of performance measures and how performance measurement impacts on performance. This review showed that:
- there was little field research on the process of using performance measures to manage performance and the consequential impact on performance, and in practice the organisational context and the performance measurement content and process will all impact on the outcome
- the significant number of contextual, process and content variables identified means that studying the impact of performance measurement on performance is difficult, and that a better question might be ‘Under what circumstances does performance measurement positively impact on organisational performance?’ (p. 374).
The case studies on which Bourne et al. 187 report were based in an organisation that had multiple business units operating in a similar manner in the same marketplace. The case studies fell into two groups. In average-performing business units, performance was managed using a simple control approach. Data were captured through the standard company systems, simply analysed and compared against company targets. The results were then communicated and acted on. The same approach was evident in high-performing business units, but this was not the main source of control. In high-performing business units, the simple control approach was used to verify performance at the end of the period, but the main drive for performance came from continual interaction with the performance data. Branch managers had their own data collection systems and indicators of performance. They created their own approaches to analysis and used these to project forward future performance. They then intervened using their knowledge of the situation throughout the period, rather than waiting for period-end feedback. Branch managers in high-performing business units were more sophisticated and explicit in their understanding of the drivers of performance, more intense in their communication and more varied in their courses of action.
In summary, Bourne et al. 187 concluded that the ‘interactive’ nature of how they managed performance was the most prevalent difference observed between high- and average-performing business units. This characteristic reflects the features of continuous monitoring, feedback, dissemination and learning from results identified by Fryer et al. 186
These insights from the management literature resonate with what we know about the drivers of large-scale change in health care. In a recent review, Best et al. 188 analyse the literature on large-scale transformations in health-care systems and provide a synthesis of what is known about underlying mechanisms. This is a realist review that identifies health-care systems as complex adaptive systems and was guided by the working assumption that a particular intervention triggers particular mechanisms in specific contexts. From the literature, Best et al. 188 identify a list of five ‘simple rules’ (broad principles that apply across all large-scale transformation programmes). These are (1) blend designated leadership with distributed leadership; (2) establish feedback loops; (3) attend to history; (4) engage physicians; and (5) include patients and families. Notably, the broad principles identified by Best et al. 188 include some, though not all, of the key features of successful performance management systems identified by Fryer et al. ,186 although the language used is very different.
In 1995, the VHA initiated a radical redesign that aimed to ensure predictable and consistent provision of high-quality care everywhere in the system and optimal health-care value. 77 Among other key factors such as the development of an integrated electronic patient record, the creation of a national administration system with regional and local autonomy, and the development of integrated service networks, a central aspect of the VHA's transformation was a new performance management and measurement system. As reported in numerous papers, the VHA's national performance on a range of measures improved dramatically in succeeding years, generally meeting or exceeding performance in other national health systems. 76,189–194
So what were the characteristics of this performance management and measurement system that led to this success, and is it legitimate to see it as an example of an organisational mechanism through which research engagement leads to improved performance?
In an account of the history of the VHA's performance management and measurement system, Perlin et al. 195 describe it as rigorous and data intensive, identifying key features such as:
- a relatively small set of performance measures (including, but not limited to, clinical performance measures) that were chosen to be both ambitious and transformative – ‘stretch goals’
- clinical performance measures that were based on evidence-based guidelines and generated through collaborations between in-house professionals and affiliated academic health systems in a collective effort ‘to systematically translate the best evidence into recommendations for best practice’ (p. 830) – a process that Perlin et al. 195 describe as ‘an evidence-based approach that extends the principles of evidence-based medicine to the administrative arena, a concept that might be termed “evidence-based quality management”’ (p. 830)
- administrative staff and clinical managers who worked collaboratively to implement the performance measures locally
- an annual performance contract that was created collaboratively by central management and field leaders and cascaded to clinicians and managers throughout the system in an attempt to reduce ‘you–us’ tensions
- public reporting and regular feedback widely shared throughout the system. 195
Further crucial features of this system emerge from this and other accounts. Perlin et al. 195 note that this performance management and measurement system was supported by other features of the VHA, such as a system-wide information system that incorporated an electronic patient record and patient-centred care co-ordination that extended across the whole continuum of care. Kizer and Kirsh196 describe how this system was used prospectively to achieve data-driven process improvement and forward-looking change, and how regional and local directors were given ‘significant autonomy to innovate and design strategies tailored to their particular needs, resources and environments of care’ (p. 396). 196
As Perlin et al.'s195 comment about evidence-based quality management suggests, and the description above confirms, the VHA performance management and measurement system launched in 1996 was evidence-based and incorporated all the key features of a successful system described by Fryer et al. 186 The answer to the question posed above is therefore ‘yes’ on both counts, but with one important qualification. The literature confirms that the development of the VHA performance system does provide an example of a deliberate change to what Crilly et al. 74 called ‘organisational form’, and overwhelmingly the papers we looked at indicate that this change was associated with a deliberate policy to engage systematically in research not only to support the development of individual performance measures but also to support the design of the performance system as a whole. (An exception in the literature was Craig et al.,189 who do not mention research engagement as a factor in the transformation of the VA.) The key qualification, however, is that this change did not occur in isolation. It was part of a wider multipronged change initiative that sought to transform the whole VHA, linking performance measurement, information technology, organisational restructuring and aligned research efforts to improve performance. 139 It is to this broader concept of organisational change that we now turn.
The complexity of health-care organisations
The VHA is not the only health system in which the research function has been fully integrated into the organisational structure. Commenting on its success, Lomas192 also describes another US health system, Kaiser Permanente, in which health services research is used throughout to guide the decisions of its managers and clinicians. Many other organisational structures have been designed to support ongoing partnerships between researchers and users of research in health-care organisations in the USA and elsewhere. In the USA, in addition to the VHA (and its QUERI) and Kaiser Permanente, these include the University of Pennsylvania Health System,197 the United Healthcare Corporation's Center for Health Care Policy and Evaluation,198 and the Agency for Healthcare Research and Quality's (AHRQ) Integrated Delivery Systems Research Network (IDSRN). 173 In the UK, there are now Biomedical Research Centres and Units, Academic Health Science Centres, and the CLAHRCs. The CLAHRCs were established in 2008 to engage local health-care organisations and services in health research, and thereby improve their performance. They are therefore relatively new and are still being evaluated, but there are some emerging lessons about how these collaborations are operating and the mechanisms through which they hope improvements will be achieved. For example, Kislov et al. 199 suggest that CLAHRCs ‘have been charged with an ambitious goal of creating a new, distributed model for the conduct and application of applied health research that links producers and users’ (p. 8) and hypothesise that this requires the cultivation of new multiprofessional and multiorganisational communities of practice.
The common features of these organisational structures can be summed up, perhaps over-simplistically, in three words: complex, comprehensive and collaborative. For example, Garrido and Barbeau200 describe how Kaiser Permanente is supported by collaborations with external research institutions and is involved with a local specialist learning community. These collaborations depend on the active engagement of an in-house research and clinical champion and on the widespread participation of clinicians, and can call on data with high integrity and granularity and on statistical and analytic capacity. There is senior operational support. 200 Further details from papers describing initiatives for Kaiser Permanente and the other US organisations listed above are provided in Table 3.
Programme and some relevant papers | Programme aims and characteristics | Type of research supported | Facilitating factors | Limiting factors/challenges | Operational impact | Comment |
---|---|---|---|---|---|---|
AHRQ's IDSRN, USA, 2000–5; Gold 2007173 | To encourage formal partnerships between organised delivery systems and researchers to support work on operationally relevant studies to improve care delivery and systems. Each of the nine teams involved a lead organisation and sought to marry research to practice by having researchers embedded in (half of the teams) or collaborating with operational managed care plans, hospital-based integrated delivery systems, large multispecialty groups, or safety net providers. A programme of broad scope, specifically organisationally orientated. Seen as a ‘laboratory’ that embedded research into real-world settings | More expansive than traditional health services research – context and application major concerns. About half of the IDSRN projects employed relatively traditional research methods that were applied to operational settings and needs (operational data assessment and validation; clinical intervention and assessment; research using integrated delivery system data; health services research within an operational system other than evaluation of clinical intervention). Other projects employed less traditional methods to look at systems analysis or the development of new delivery or management tools | Pre-existing research capacity in operational system; a prior history of operational systems and researchers/research organisations working together; internal champions for joint working between research and operational systems within the latter; responsiveness of funded projects to delivery system needs; ongoing support for multiple project phases; delivery of applied products and tools that helped users see operational relevance of projects | Limited project funding; competing demands on potential research users; failure to reach appropriate audiences; involving outside researchers with more than one operating system was harder to co-ordinate but enhanced the potential of generating scalable knowledge, although this potential was not well exploited in IDSRN | Mixed – 30 out of 50 completed projects had an operational effect or use; widespread diffusion was rare over the period studied | Strengths of IDSRN: moved beyond the traditional focus on university-based health services research; helped to develop stronger links between researchers and research organisations and operational organisations; provided a vehicle for AHRQ to become more ‘nimble’ in its funding. Weaknesses of IDSRN: opportunities to generate scalable knowledge and to synthesise findings from individual projects not exploited. ‘Effective implementation arguably requires moving beyond single projects to develop longitudinal strategies that take maximum advantage of what health services research has to offer, while converting that knowledge into a form more accessible to users’ (p. 10) |
VHA (post-1995 reforms), USA, 1995 to date; Oliver 200776; Francis 2006139; Kizer 2012196. VHA QUERI, USA, 1998 to date; Demakis 2000203 | Linking performance measurement, information technology and aligned research efforts to facilitate QI in a large complex health system. QUERI aims: to translate research discoveries and innovations into better patient care and systems improvement – a multidisciplinary, data-driven QI programme fully integrated within VHA's Strategic Framework for Quality Management | Wide range of research approaches: health service research linked with QI using the six steps of the QUERI process; action-orientated research. QUERI sits at the intersection of clinical practice and research | Organisational structures: national administrative system with regional and local autonomy; updated resource allocation, creating incentives to deliver care in the most appropriate setting; investment in primary care and in IT, including integrated electronic patient record; evidence-based performance measurement system – development of performance measures is bottom-up as well as top-down; comprehensive patient-care delivery system; national research programme with established centres of excellence. Other facilitating factors: strong national leadership; physician buy-in; emphasis on knowledge translation; ongoing co-operation within organisations; culture of critical thinking and ongoing learning; early wins; effective teamwork | Polytrauma, which challenges traditional disease-based approaches; perils and pitfalls of performance measurement (Kizer 2012196) | VHA outperforms other health systems on standardised measures of quality | Dialogue between clinical researchers and VA leaders occurs through structured activities (such as QUERI). Research can be both rigorous and relevant. ‘Knowledge translation . . . is a two-way interaction in which the research questions are shaped by measures of veteran need and system performance, and supportive structures allow research outputs to be implemented as formal practice and policy’ [Francis and Perlin (p. 69)139] |
Kaiser Permanente: The Northern California Perinatal Research Unit, USA; Garrido 2010200 | To create meaningful medical knowledge based on rigorous research. Research is an integral part of their mission | Research and operations analysis – a hybrid research model combining traditional research capability with a rapid research function. Research, reporting and ad hoc analysis | Collaboration with external research institutions and involvement in a local specialist learning community; in-house research and clinical champion; data with high integrity and granularity; statistical and analytic capacity; widespread participation of clinicians; senior operational support | None mentioned | Demonstrated effectiveness through: | The work of the Northern California Perinatal Research Unit is enabled by aligned interests and strong partnerships with clinicians, medical leadership, and regional executive operations leadership |
United Healthcare Corporation's Center for Health Care Policy and Evaluation, USA, 1992 to date; Peterson 1996198 | To develop methods and systems to evaluate and enhance the performance of health delivery in areas of cost, quality and access | Health service research and cost-effectiveness studies, in collaboration with United Healthcare-affiliated health plans, academic and research institutions and government agencies, and including research on: | Unified medical management information system | Tension between centre-determined research agenda vs. supporting ongoing and often short-term analytic and evaluation needs of health plans and operational units; securing health plan and provider participation in studies; disseminating research findings effectively to encourage application | Not described in any detail – illustrative findings from two case studies (on antibiotic use for otitis media and cost-effectiveness of endometrial ablation vs. hysterectomy) | Four reasons why health service research is important at the United Healthcare Corporation: |
University of Pennsylvania Health System – an academic health system, USA; Bernard 2000197 | ‘Health & disease management’ – a clinical improvement process aimed to ensure that the best practices known to medical science are incorporated with minimum variation over the whole continuum of care. Rapid development model, adapting national guidelines. Involves the whole team, and students and patients | Clinical evaluation and outcome management. A ‘unique research opportunity for academic physicians who can adapt the use of process control measurement techniques’ | Clinical leaders to ‘champion’ the programme; educational, service and research missions of the academic health system; commitment of clinical and administrative leadership; patient involvement; care managers; co-ordinated resources throughout the system; aggregated programme performance data that are distributed to entire health system leadership and staff, supported by provider-specific data | Operational, economic, professional resistance; clinical performance measurement less well established or detailed than economic or service measures – which typically become the ‘bottom line’, dominating managerial priorities; clinicians' belief that delivering comprehensive care is inherently time-consuming; financial incentives not fully aligned with clinical goals | Examples of clinical outcomes achieved from the programme in: | A continuous improvement approach implementing best practice protocols. This is a team approach which is: |
The key message from the literature examined in this chapter is that it is these characteristics, taken as a whole, that promote improvement in performance. In other words, this comprehensive approach facilitates more progress than a focus on single mechanisms through which research engagement works in isolation at various levels within a health system or health-care organisation to improve performance. If this is correct, then in seeking an answer to our review question we need to explore how multiple mechanisms work simultaneously, if not always harmoniously, at different levels to promote change in environments subject to many other pressures. To paraphrase Bourne et al.,187 we need to reassess our review question and ask ‘under what circumstances does engagement in research positively impact on organisational performance?’.
This is not a simple question. Best and Holmes201 have described how our understanding of knowledge management and research implementation has moved from a linear approach in which knowledge is a product of research that passes from researchers to users by diffusion and can be generalised across contexts, to a relationship model in which knowledge from multiple sources is exchanged between researchers and end-users through interpersonal relations and focused networks that involve a collaborative production–synthesis–integration cycle. In this second model, knowledge is context dependent and must be adapted to local circumstances, and its degree of use is a function of effective relationships and processes. The third and final model is a systems model in which knowledge can be both explicit and tacit, and is generated, assessed and used within specific contexts and cultures dominated by different priorities. This knowledge cycle is mediated throughout by relationships and must be understood from a systems perspective, in the context of the organisation and its strategic purposes. The degree of use is a function of effective integration with the organisation and its systems.
When considering organisational engagement in research, it is this complexity that needs to be explored. But this is difficult, which explains why we found only three empirical studies in our focused review on the organisational mechanisms through which research engagement improves performance. It is not easy to disaggregate the impact of this particular set of mechanisms from that of other change mechanisms within health-care organisations. To do so requires sophisticated evaluation methodologies. To take one example: the transformation of the VHA and the development of its QUERI programme have generated a huge literature, and these papers, taken together, provide a compelling narrative about the role that research engagement, among other factors, played in that transformation. However, among all the papers we reviewed we found only one controlled study (by Chaney et al. 126) of an organisational intervention that had used research engagement as a mechanism to promote change. This was a cluster RCT of an evidence-based quality improvement project in which clinical managers worked with researchers to make use of prior evidence on effective care models, taking account of local contexts. Chaney et al. 126 report some improvement in some of the processes of care, and this paper therefore adds to the accumulating evidence that research engagement is one of a number of factors that improve health-care performance. However, their description of the difficulties involved in this evaluation is salutary, and indicates that attempts to enlarge this evidence base and distinguish the precise contribution of research engagement will need to overcome complex methodological hurdles. Francis and Perlin139 make a similar comment about the VHA, as do Nutley et al. 202 in a discussion of systems-based research use, and Krzyzanowska et al.,95 who note that: ‘Broadly, at the heart of understanding the relationship between research and its impact, whether through infrastructure, processes of care or other areas such as knowledge generation, lies complex adaptive systems theory’ (p. vii11).
The use of organisational mechanisms
Finally, we turn to the use of the mechanisms through which research engagement at organisational level operates (research networks, QI projects, performance management and measurement systems, etc.) and to the need to consider local contexts.
The implementation of the VHA performance contract was deliberately designed to allow regional and local directors significant autonomy, and considerable latitude was given on how to reach performance targets at the local level. 139 This emphasis on devolved decision-making runs right through accounts of the VHA. Francis and Perlin139 talk about the development of clinical guidelines and performance measures being as much a bottom-up as top-down activity within the VHA, ‘belying the common misconception that, as a federal health system with many connections to the military, change occurs through “command and control”’ (p. 65). Kizer and Kirsh196 suggest that it was a combination of ‘VA's compelling mission, the clarity of performance expectations, healthy intra-organisational competition, substantial local autonomy and a sense of professional fulfilment’ (p. 396) that drove rapid improvement. They also point out that in recent years things have changed: a more centralist approach is being taken in the VHA and this has undermined the flexibility to tailor local improvement strategies. 196
In a similar vein, another US study, by Bernard et al. ,197 emphasises the importance of a comprehensive and inclusive approach in which all of the health-care team are involved at all stages in decision-making in a way that gives them ownership of health and disease management programmes in an academic health system. In the UK, the way in which the CLAHRCs have been developed as local collaboratives that involve local communities of practice also reflects this thinking. The CLAHRCs are committed to developing strong interorganisational and interpersonal relations and promoting effective two-way collaboration, not just token involvement. NHS managers and clinicians as well as patients and the wider public are encouraged to play an active role in all stages of the research cycle from question generation and formulation through to evaluation and implementation of research findings.
This widespread engagement has built-in implications for the type of research pursued. What is sought is research that answers specific concerns, and this (deliberately) results in a programme of research driven by the needs of the health-care system and its users – needs-based action research. However, this requirement can lead to tension between managers and clinicians in health-care organisations seeking speedy answers to local problems (relevance) and academics seeking to produce generalisable research (rigour). These difficulties are not insurmountable: ‘The VHA experience demonstrates that both are possible and that benefits can be translated in a timely manner when there is active dialogue among clinicians, researchers and policy-makers’ (p. 69). 139 However, alleviating them requires an ability to balance competing concerns, sufficient culture change on both sides to promote better understanding, and supportive external structures. In the USA, the VHA and the other health systems cited above largely got this right, and were helped by a university system that is generally more supportive than the UK system of activities outside formal academic research, although in the UK the new impact category in the Research Excellence Framework may help.
In summary, the literature tells us that at an organisational level the mechanisms through which research engagement promotes performance improvement are, and need to be seen as, one facet in a wider, multipronged change strategy. These mechanisms operate in complex systems, and exploring and evaluating how they operate is not straightforward. Although there is little definitive empirical evidence from single studies, the wider literature does provide examples of health-care systems in which the research function has been fully integrated into the organisational structure. These accounts indicate that this integration worked well, promoting a virtuous cycle in which research of greater relevance to the needs of those served by the system is identified, pursued and implemented, and has led to improved performance.
Chapter 7 Discussion and conclusions
Developing appropriate methods for data collection and analysis
This review took an innovative form because of the research topic (largely unexplored, but potentially important) and the disparate nature of the evidence that was identified. A review by Clarke and Loudon,31 published after our proposal had been submitted, was also influential because it suggested that the evidence base to support the widely held assumption that research engagement improves health-care performance was less strong than some had thought. We therefore felt it important to see if (as our initial scoping exercise indicated) we could identify, and analyse, a larger body of evidence through conducting a focused review that would involve a more recent search than Clarke and Loudon's in 2009,31 and would have a somewhat wider scope.
By itself, however, such a focused review seemed unlikely to enable us fully to address the request in the ITT for an evidence synthesis ‘which maps and explores the likely or plausible mechanisms through which research engagement might bear on wider performance at clinician, team, service or organisational levels’. 1 For example, both our original scoping exercise and advice from an expert suggested we should consider the way in which some health-care organisations support research engagement as part of an overall strategy to improve the quality of the health care they provide, and it was clear that important evidence on such issues came from descriptive articles that would not meet the inclusion criteria for the focused review.
We responded to these challenges by developing what we have called an hourglass review. This consisted of three stages:
-
A broad scoping or mapping stage in which we explored a large number of bodies of literature that we thought might be relevant to the review topic.
-
A focused (or formal) review on the narrower (though still large) core question of whether or not research engagement improves health care. For this review we searched extensively, but applied quite tight inclusion criteria.
-
Finally, a wider (but less systematic) review of relevant papers that we had identified during the two earlier stages (plus continuing snowballing). We therefore included papers that did not meet our inclusion criteria for the focused review, but nevertheless had something important to say, in particular about the mechanisms through which research engagement might improve performance and about the role of organisations.
The hourglass shape refers to the scope of the analysis at each stage, and to the number of papers considered in detail; in terms of the sheer volume of titles and abstracts processed, the throughput of the review was greatest in the second (or middle) stage.
The matrix
It became clear at the mapping stage that no neat patterns were emerging from the literature. Therefore, we developed a matrix to help us characterise the circumstances in which research engagement might improve health-care performance and the mechanisms that might be at work.
We identified two main dimensions along which to categorise the research-on-research studies: the degree of intentionality of the link between research engagement and performance, and the scope of the impact on health care. (These are explained in detail in Chapter 2 and illustrated in Table 1; a small illustrative sketch of how studies could be coded against the matrix follows the two lists below.)
The three overlapping categories of intentionality we used in our analysis were:
-
least intentionality – where the improved health-care performance resulting from engagement in research arises as a by-product of a trial or other research which has been conducted to test a specific therapy or approach
-
a middle category – where structures such as research networks promote the conduct of research on specific therapies in a particular field but can also have the wider remit of informing network members about research more generally and addressing their identified research needs
-
greatest intentionality – where various interventions such as collaborative approaches, participatory and action research, and organisational approaches explicitly seek to produce improvements in health-care performance as a direct consequence of research engagement.
The two categories related to the scope of the impact were:
-
broader impact – those who have engaged in research becoming more willing and/or able (because of individual or unit capacities) to provide evidence-based care that is not related to the specific findings of the research on which they were engaged
-
specific impact – those who have engaged in research becoming more willing and/or able (because of increased knowledge/understanding of the findings and/or infrastructure related to the procedures, etc.) to provide evidence-based care that is related to the specific findings or processes of the research on which they were engaged.
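Purely by way of illustration, and not as part of the review itself, the short sketch below shows how study records could be coded against these two dimensions and tallied into the resulting 3 × 2 grid; the study names and codings are invented.

```python
# Illustrative sketch only: a toy representation of the review matrix described
# above (three overlapping categories of intentionality crossed with two
# categories of scope of impact). Study names and codings are hypothetical.

INTENTIONALITY = ("by-product", "network", "intervention")  # least -> greatest
SCOPE = ("broader", "specific")

def classify(study):
    """Return the (intentionality, scope) cell for a coded study record."""
    if study["intentionality"] not in INTENTIONALITY:
        raise ValueError("unknown intentionality: %s" % study["intentionality"])
    if study["scope"] not in SCOPE:
        raise ValueError("unknown scope: %s" % study["scope"])
    return (study["intentionality"], study["scope"])

# Hypothetical coded records, one per included paper
studies = [
    {"id": "trial-site comparison", "intentionality": "by-product", "scope": "broader"},
    {"id": "cancer network audit", "intentionality": "network", "scope": "specific"},
    {"id": "action research project", "intentionality": "intervention", "scope": "specific"},
]

# Tally papers into each cell of the 3 x 2 matrix
matrix = {(i, s): 0 for i in INTENTIONALITY for s in SCOPE}
for study in studies:
    matrix[classify(study)] += 1

for cell, count in sorted(matrix.items()):
    print(cell, count)
```

In the review itself this coding was a matter of qualitative judgement about each paper, not a mechanical rule; the sketch simply makes the shape of the matrix concrete.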
Emerging themes about the role of research engagement
-
Systematic analysis of the data related to research engagement seems to be in its infancy, despite the widely held assumptions about the benefits of research engagement.
-
In our focused review, we were able to identify more evidence than had previously been collated on the question of whether or not research engagement improves health-care performance (see Chapter 4).
-
We identified a somewhat more substantial body of evidence than Clarke and Loudon31 because the scope of our review was wider (e.g. not limited to trials) and we conducted it at a later date. We finished with 33 papers97–129 in our focused review, including 1297–102,104,105,108–111 of the 13 that Clarke and Loudon had identified.
-
Twenty-eight papers97,99,101–105,107,108,110–126,128,129 were positive (of which six105,112,115,120,126,129 were positive/mixed) in relation to the key question of whether or not research engagement improves health-care performance. Just five papers98,100,106,109,127 were negative (of which two100,127 were mixed/negative).
-
The pattern of our findings is similar to that of Clarke and Loudon. 31 Only a minority of the positive cases (seven102,105,107,110,114,116,129 out of 28) reported some improvement in health outcomes. For the rest, the improved care took the form of improved (usually more evidence-based) processes of care.
-
We identified 21 papers97–117 in our ‘by-product’ category, including all 1297–102,104,105,108–111 of the papers from Clarke and Loudon31 that we included in our review. Of these we classified 1797,99,101–105,107,108,110–117 as positive, or positive/mixed. By adding papers that we classified as being in the research network and the intervention categories, we increased the total number of papers by a further 12. 118–129 The eight included network papers118–125 were classified as being positive, or positive/mixed; as were three126,128,129 of the four126–129 intervention studies. These papers are of considerable importance in enabling us to go further than previous evidence in suggesting research engagement does improve health-care performance.
-
Of the 2897,99,101–105,107,108,110–126,128,129 positive or mixed/positive studies, 1399,102,103,108,110,113–119,122 describe broader impact and 1597,101,104,105,107,111,112,120,121,123–126,128,129 describe specific impact. Within this overall balance 1099,102,103,108,110,113–117 out of 1797,99,101–105,107,108,110–117 of the by-product studies showed broader impact (as would fit with the absorptive capacity model), and all the intervention studies showed specific impact (as would fit a key element of the collaborative and action research theories).
-
We also considered the levels of research engagement. Clarke and Loudon31 had examined two levels: practitioners and institutions. In practice we found it very difficult to be more discerning than this because there is little consensus about the reporting terms used and we could not readily apply the separate categories of team, service and organisation. Therefore, we followed Clarke and Loudon31 in using a twofold distinction between clinicians and institutions. Twenty-two studies97–99,102–105,107,109,110,114–116,118–122,126–129 were at the institutional level, and 11100,101,106,108,111–113,117,123–125 at the clinician level. Of the 2897,99,101–105,107,108,110–126,128,129 positive and mixed/positive papers, 1997,99,102–105,107,110,114–116,118–122,126,128,129 were institutional and nine101,108,111–113,117,123–125 clinician level. Of the five98,100,106,109,127 negative and mixed/negative papers, three98,109,127 were institutional and two100,106 clinician level.
-
Analysing the mechanisms (see Chapter 5).
-
For this analysis we drew on the focused review and the wider review, and applied the matrix.
-
The positive papers in the focused review describe a diverse range of mechanisms through which research engagement might improve health care. We have more evidence than previously collated (and the wider review added even more), but we have also shown that there are many circumstances and mechanisms at work, that more than one mechanism is often operative, and that the evidence available for each one is limited.
-
The wider review allowed a fuller data collection, and the matrix facilitated a more nuanced analysis of the various roles of research engagement.
-
Using the by-product and network categories from the matrix we can reasonably suggest that when clinicians and health-care organisations engage in research there is the likelihood of a positive impact on health-care performance, even when that has not been the primary aim of the research. This finding is further supported by evidence from the wider review, which provides examples of studies that illustrate some progress along the pathway from research engagement to improved health care. However, the wider review also indicated situations in which the impacts seemed less likely to arise, and showed that the operation of networks was not always easy to sustain.
-
There were only four papers in the intervention category in the focused review, but a much higher number in the wider review. The latter demonstrate how collaborative or action research can encourage some progress along the pathway from research engagement towards improved health-care performance. However, various studies also demonstrated that there can be considerable obstacles to overcome, particularly with one-off initiatives.
-
Organisations that have incorporated research engagement as part of a comprehensive approach to health-care improvement demonstrate how research engagement can be most effective (see Chapter 6).
-
Health-care organisations and systems provide the context within which research engagement operates at other levels, and organisations in which the research function is fully integrated into the organisational structure can out-perform other organisations that pay less formal heed to research and its outputs. 76,189–193 However, at the organisational level, as at other levels, engagement in research operates through a variety of mechanisms and is only one of many influences on performance. Disaggregating how these mechanisms operate in complex systems is not straightforward.
-
Widespread engagement in research by all the key groups of relevant stakeholders often influences the type of research pursued. What is sought in these circumstances is research that answers specific concerns, and this (deliberately) results in a programme of research driven by the needs of the health-care system and its users.
-
There are various initiatives aimed at deliberately increasing research engagement with the explicit aim of improving health-care performance.
-
In the USA, these initiatives include the 2003 Roadmap for Medical Research from the NIH149 under which resources have been devoted to boosting CBPPR through the Clinical and Translational Science Awards. The NIH has funded a large-scale project aimed at identifying how this initiative might best operate. 131,147,148
-
In the UK, the NHS Commissioning Board has a duty to promote research and innovation: ‘The NHS Commissioning Board's objective is to ensure that the new commissioning system promotes and supports participation by NHS organisations and NHS patients in research funded by both commercial and non-commercial organisations, most importantly to improve patient outcomes, but also to contribute to economic growth’. 204 In addition, the goal of the Academic Health Science Networks is to improve patient and population health outcomes by translating research into practice and developing and implementing integrated health-care systems. 205
-
Various studies have indicated that new structures such as research networks and schemes aimed at involving individual health-care professionals more fully in research can face difficulties in making progress, particularly if there are not changes at the organisational level to support the initiatives. 127,169
-
The moves towards trials and other well-found research taking place within networks, and as part of wider interventions, mean that research engagement leading to improved health-care performance is increasingly an intended outcome rather than a by-product.
Pulling together the evidence related to mechanisms through which health care might be improved in research-active settings
The majority of the papers in the focused review were by-product papers, and the majority of these were positive about the relation between research engagement and improved performance. Various potential mechanisms were suggested, including:
-
a specific impact, at individual level, through the increased awareness and understanding of specific findings arising from research participation
-
a specific impact, at institutional level, through an increased use of the protocol and processes associated with a specific research study
-
a broader impact, at individual level, through changes in attitudes to research and resulting behaviour
-
a broader impact, at institutional level, through infrastructure development associated with research participation.
We also saw that the mechanisms associated with research networks (the second largest group of papers in the focused review) represent a partial formalisation and use on a regular basis (through the provision of more effective collaboration and more supportive contexts) of some of these potential mechanisms. Finally, we saw how this partial formalisation had been taken still further in interventions deliberately designed to integrate the research function into organisational structures (as described in the intervention papers in the focused review and more extensively covered in the wider review).
Building on these findings, on the theoretical approaches discussed in Chapter 2, and on previous work on potential mechanisms by Krzyzanowska et al. ,95 we can start to develop a taxonomy of the various mechanisms and submechanisms through which outcomes may be improved in research-active settings (Box 4).
1. Absorptive capacity: this is most relevant for wider adoption of research in institutions:
Changes in the structure of institutions – improvements in infrastructure
-
Attributes of the setting in which care is delivered, such as accommodation, equipment and personnel, which are brought in to perform research-related activities and may remain in place after the research is completed, for example in the UK the capital spend on the Biomedical Research Centres which resulted in new co-located research and clinical facilities
Changes in human capital
-
Training/updating staff through research engagement leading to the acquisition and use of new skills, other gains in knowledge and changes in attitudes towards research and research findings
-
Enhancement of group and individual behaviour including:
-
more rapid uptake of new treatments and greater likelihood of following clinical guidelines
-
improved collaboration, establishment of expert teams, etc.
2. Improvements in the processes of care related to conducting a specific trial
-
A more rigorous process of defining the standard of care for patients irrespective of their inclusion in the trial
-
Closer monitoring and support
-
Early access to novel technologies
3. Organisational mechanisms within health-care systems
-
For example, in the VA the whole organisation uses research to improve health care in various and integrated ways, including conducting research to address known issues in the health-care system, allowing physicians time to conduct research (and thus being an attractive organisation to work for), conducting research to identify the best performance targets to set, using research in QI, etc.
4. Collaborative approaches between organisations, teams and individuals as a mechanism
-
Linkage and exchange that improves the relevance of research and policy-makers'/managers'/clinicians' willingness to use it
-
Academic Health Science Centres, teaching/research hospitals
-
Research networks as an increasingly important mechanism
5. Action research and participatory research as mechanisms that improve relevance, understanding of research and willingness to use research
Limitations and robustness of the conclusions
This evidence synthesis could be described as research on research on research: it is an evidence synthesis of studies that assess the impact of research engagement. Conducting reviews of this nature is a complex task and it is not surprising that there are various potential limitations to this study. The first relates to locating the relevant literature. At the outset we anticipated that there would be many bodies of literature addressing the review question. We used the initial mapping stage to explore the potential locations of relevant literature and our findings informed the search strategy for the focused review. The search terms remained quite broad in order to cast the net as wide as possible and catch papers published in different fields, journals and countries. As a consequence, a significant amount of time within the review process was spent on mapping, refining the question and developing search terms. Owing to the nature of the research question, the search terms included a number of generic and widely used terms including ‘research’ and ‘engagement’. We sought expert advice and worked with an information scientist to develop the search strategy and conduct a search that was sufficiently sensitive to identify relevant papers. Despite the difficulty of the task, we were reassured that we identified nine of the 13 papers in the previous review by Clarke and Loudon31 using the search terms (and the others were added by snowballing).
In order to make the search feasible, the review focused on engagement in research, rather than using the wider definition of research engagement that would have included engagement with research. The focused review shares the limitations of many systematic reviews. In pursuing a narrow question, we inevitably excluded large volumes of potentially interesting, relevant research that did not meet the inclusion criteria. As with other reviews, we have also had to exclude a number of papers owing to the lack of information provided by the authors about key elements of the study (such as design and outcomes). We have tried to address these limitations by conducting a wider additional synthesis (the wider review) to help explain the findings of the focused review and give the final review more explanatory power.
In Chapter 4, we discussed concerns that arose in our analysis of two studies in the focused review about the extent to which engagement in small individual studies could improve performance,100,106 compared with relying on reviews of all the studies in a field. We suggest this depends on the context, in this case the existing knowledge base, which provides the basis for judgements about whether or not a study is well-found and whether its findings are supported or challenged by other studies in the same field. One of the limitations of our review is that it was not always possible to make these judgements.
Finally, another common limitation reflects the reliance of reviewers on what is already published in the literature. This meant that the review could capture very recent developments (such as the outcomes of UK research networks) only through abstracts. The section of the focused review on networks draws exclusively on US studies of research networks. This reflects the more established nature of research networks in the USA, and also an approach to evaluation that is consistent with the rigorous inclusion criteria used for the focused review. Linked to this is another challenge common to other systematic reviews: the impact of publication bias. As with other fields, there is likely to be a bias towards the publication of studies with positive results to share with the academic community. We have sought to address this by searching the grey literature, conducting a web search and writing to some key authors in the field to identify unpublished literature. We have also kept this potential limitation in mind in conducting the analysis. Furthermore, in at least some of the studies it was difficult to interpret the findings in terms of the issues we were considering.
Methodological developments
Given the topic and our interest in both realist synthesis and narrative synthesis we sought expert advice from the advisory group. We were advised that our review question was likely to contain too many diverse aspects to conduct a meaningful realist synthesis. We drew on narrative synthesis methods at various stages, including the mapping stage, but felt the need to develop our own approach to adequately address the review question.
Despite the limitations described above, we found the hourglass review approach had various advantages. It gave us the opportunity to start broadly, which was useful given the difficulty of locating examples of the phenomenon we were seeking. The hourglass approach was also useful because the detailed mapping allowed us to explore issues from many angles. It also meant that we could then conduct a more focused, or formal, review on the central question of whether or not research engagement improves health care. This was particularly desirable in the context of the review published after the submission of our proposal,31 which concluded that the evidence for suggesting research engagement improves health-care performance was not as strong as seemed to be widely assumed.
Finally, the wider review allowed us to collate a wider range of evidence and more fully address a range of issues related to the mechanisms through which research engagement might improve health-care performance. In relation to networks, it allowed us to bring in examples from beyond the USA, and also brought in some examples of problems with networks to counteract the examples in the focused review, which were all positive. In relation to issues of collaborative research and the organisational dimension, the wider review allowed us to consider a wide range of papers that did not meet the inclusion criteria adopted for the focused review.
One of the main reasons for developing the hourglass review methodology was to allow for a final, wider review stage and, as is demonstrated above, it was this less formal approach that enabled us to draw on a more diverse range of literature.
The approach we developed thus allowed considerable progress to be made, but was very resource intensive.
Discussing future research needs and the scope for developing performance indicators
Clarke and Loudon31 recommended that RCTs should be conducted in this area in an attempt to generate stronger evidence, but acknowledged the considerable difficulties involved. Pater et al. 94 analyse in considerable detail how clinical trials and other well-found research studies in this field could be designed, but the focus of their proposed question was narrower than the one we addressed in our review. They were asking whether or not patients who receive care in a research environment derive benefits that are attributable to that general research activity, rather than simply because those patients were exposed to a successful intervention in a research project. They state: ‘this question has been deliberately phrased to indicate our concern with benefits that occur contemporaneously with the research activity’ (p. vii58).
In conducting our review, we identified several areas where there appeared to be less evidence available than we might have expected to find. These included empirical studies related to the role of medical academics. In relation to research networks there seemed to be fewer studies from the UK than from the USA on how the benefits of research engagement might arise. There might be some problems in replicating exactly the type of studies identified in the focused review because there has now been such a growth of research networks in the UK; nevertheless, that only serves to highlight the importance of this area.
As explained in Chapter 1, our review focused primarily on the issues of performance in relation to the quality agenda. We did, however, identify one study that undertook a data envelopment analysis comparing the performance of academic medical centres with other hospitals in Germany, and identified an ‘emphasis on research’ as a factor that increased efficiency. 206 This study compared the level of research undertaken with other factors, but failed to meet the inclusion criteria for our focused review because the data on health-care performance related to the volume of work and not the quality of care. Nevertheless, this is an approach on which it might be worth conducting a scoping exercise in the UK.
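To give a concrete sense of what such a scoping exercise might involve, the sketch below shows a toy input-oriented (CCR) data envelopment analysis of the general kind used in the German study. The hospital names, inputs and outputs are invented, and the use of scipy's linear programming routine is simply one convenient way of solving the underlying optimisation; it is a sketch of the technique, not of the cited study's model.

```python
# Illustrative sketch only: toy input-oriented (CCR) data envelopment analysis.
# Hospital names and figures are invented; a real scoping exercise would use
# routine NHS activity data and a carefully specified set of inputs/outputs.
import numpy as np
from scipy.optimize import linprog

hospitals = ["A (research-active)", "B", "C", "D"]
# inputs per hospital: (beds, clinical staff)
X = np.array([[500.0, 1200.0],
              [450.0, 1000.0],
              [300.0,  700.0],
              [600.0, 1500.0]])
# outputs per hospital: (inpatient episodes, peer-reviewed papers)
Y = np.array([[40000.0, 350.0],
              [30000.0,  40.0],
              [20000.0,  25.0],
              [42000.0,  60.0]])

def ccr_efficiency(o):
    """Efficiency of unit o: minimise theta such that a weighted combination of
    all units produces at least o's outputs using no more than theta times o's inputs."""
    n, m = X.shape                 # number of units, number of inputs
    s = Y.shape[1]                 # number of outputs
    c = np.zeros(n + 1)            # decision variables: [theta, lambda_1 .. lambda_n]
    c[0] = 1.0                     # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):             # sum_j lambda_j * x_ij - theta * x_io <= 0
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):             # -sum_j lambda_j * y_rj <= -y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), method="highs")
    return res.x[0]                # theta = 1 means the unit lies on the efficient frontier

for o, name in enumerate(hospitals):
    print("%s: efficiency = %.3f" % (name, ccr_efficiency(o)))
```

In a real exercise the choice of inputs and outputs, and any adjustment for research activity, would need to be specified with care; identifying suitable routine data sources is precisely what a scoping study could explore.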
The issue of research engagement as a factor in performance improvement has, largely, not previously been systematically researched as a topic in its own right. The studies we identified have mainly been one-off studies and/or considerations of the issue as part of a wider study. Therefore, we believe there would be scope for deliberately considering aspects such as the potentially negative impacts of research engagement alongside benefits. Organisation-wide interventions designed to promote research engagement also require further research. However, there are significant methodological challenges in conducting evaluations of these complex interventions and there is a need for methodological development to improve evaluations of how different mechanisms operate in complex systems (as we suggest in Chapters 5 and 6). There is also a role for social theory in developing and understanding the role of research engagement in promoting health-care improvement. Therefore, overall we believe there would be scope for a programme of work in this field that identified the various roles that could be played by a range of different methods that would be appropriate for the various topics identified. Such methods could include observational studies of research engagement within organisations, applications of social theory, and the use of large data sets.
Through the Quality Accounts, NHS trusts have reported on the number of patients recruited annually to participate in research approved by local research ethics committees. This is a useful development, but given the importance that is now being attached to research there could be value in monitoring key issues such as the relevance of the research conducted, and establishing whether or not this research activity is producing beneficial change. Experience suggests, however, that it is difficult to capture such items, especially through the small number of indicators that it is likely to be practical to introduce. This is an issue that requires careful analysis. Furthermore, one of the key elements of the VA's use of research engagement was to identify the most appropriate performance indicators (PIs) to apply within the organisation to show improvements in the processes of care.
Overall, a key role for PIs could be to encourage research participation and a culture of maximising the benefits from such engagement. Therefore, some illustrative examples of the approach towards indicators that might be used by NHS trusts and Clinical Commissioning Groups are provided below, followed by a short sketch of how such indicator values might be computed.
-
An indicator for research activity approved by local research ethics committees could continue along the lines already collected through the Quality Accounts.
-
An indicator could be developed to explore the benefits that arise as by-products from research engagement, while recognising the difficulties that arise in doing this. In the guidance for such an indicator the following items could be listed as ones that might be explored:
-
changes in care settings (e.g. additional accommodation, equipment or personnel brought in as part of a research project or programme and remaining in place afterwards)
-
changes in human capital (e.g. training or updating staff through research engagement), or changes in team or individual behaviour (including more rapid uptake of evidence-based treatments)
-
changes in processes of care (e.g. improvements in following clinical guidelines, improvements in monitoring and support, early access to new technologies).
-
An indicator of the percentage of eligible staff who have participated in research studies involving NHS/NIHR-wide research networks might be useful.
-
There might also be scope for an indicator to demonstrate the extent to which the Trust or Commissioning Group has arrangements in place to work with staff and patients to identify specific problems within a health-care system/organisation and generate and implement research-based solutions, thinking strategically as well as in the short term.
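Purely by way of illustration, the short sketch below shows how two of the suggested indicator values might be computed from a trust's annual figures; all field names and numbers are hypothetical, and any real indicator set would need the careful analysis referred to above.

```python
# Illustrative sketch only: computing two of the suggested indicator values
# from hypothetical annual figures for a single trust. All names are invented.
trust_year = {
    "patients_recruited_to_rec_approved_studies": 1240,  # Quality Accounts-style count
    "eligible_staff": 860,
    "staff_in_network_studies": 215,
}

recruitment_count = trust_year["patients_recruited_to_rec_approved_studies"]
staff_participation_pct = (
    100.0 * trust_year["staff_in_network_studies"] / trust_year["eligible_staff"]
)

print("Patients recruited to research approved by research ethics committees:", recruitment_count)
print("Eligible staff participating in network studies: %.1f%%" % staff_participation_pct)
```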
Conclusions
The focused review identified more evidence (33 papers97–129) than has previously been collated on the question of whether or not research engagement improves health-care performance. This evidence was broadly positive, with 28 papers97,99,101–105,107,108,110–126,128,129 drawing positive conclusions about the impact of research engagement (with six105,112,115,120,126,129 of these being positive/mixed). Just five papers98,100,106,109,127 were negative (of which two100,127 were negative/mixed). Drawing on the focused review (especially the by-product and network categories from the matrix) we can reasonably suggest, therefore, that when clinicians and health-care organisations engage in research there is the likelihood of a positive impact on health-care performance, but this is more likely to be on improved health-care processes than improved patient outcomes.
The wider review added additional evidence. In particular, although there were only four papers from the focused review in the intervention category of the matrix (covering interventions such as collaborative and action research), a much higher number from the wider review contribute to our analysis of the mechanisms and, for example, demonstrate how research engagement can encourage some progress along the pathway towards improved health-care performance.
Overall, we have also seen that there are many circumstances and mechanisms at work, that more than one mechanism is often operative, and that the evidence available for each one is limited. This limits the immediate implications for practice. The large-scale initiatives now being developed to encourage research engagement as a means of improving performance may also highlight the need for more research on research engagement and for adequate PIs of research engagement and of research quality in NHS trusts, building on what has previously been sought through the Quality Accounts and on the suggestions set out above. However, our findings do tell us that the mechanisms through which research engagement might improve performance overlap and rarely act in isolation, and that their effectiveness often depends on the context in which they operate. Generally, at lower levels of intentionality (where improved health-care performance is a by-product of a research study) we identified a series of one-off studies in which a diversity of detailed mechanisms were applied. At higher levels of intentionality (e.g. networks and collaborations) there had been more formalisation of potential mechanisms and research processes themselves had become an increasingly important means through which research engagement can improve health-care performance.
The number of research networks is growing, and these new structures continue to develop and evolve. The contribution of collaborative approaches to research is also developing. At an organisational level, the mechanisms through which research engagement promotes performance improvement are often only one facet within a wider, multipronged change strategy. Organisations that have deliberately integrated the research function into organisational structures demonstrate how research engagement can, among other factors, contribute to improved health-care performance.
Recommendations for research
-
Further explorations of how particular mechanisms promote research engagement. Evaluations of research networks and of existing schemes to promote the engagement of clinicians and managers in research would be particularly valuable.
-
Detailed observational research focusing on research engagement within organisations, to build understanding of mechanisms, and to explore potentially negative impacts of research engagement alongside benefits.
-
Analysis of organisation-wide interventions designed to promote research engagement. There are significant methodological challenges in conducting evaluations of these complex interventions and a need for methodological development to improve evaluations of how different mechanisms operate in complex systems.
-
Scoping exercises to identify possibilities of using large databases of research production and hospital performance.
-
The application of social theory in developing and understanding the role of research engagement in promoting health-care improvement.
Acknowledgements
Contributions of authors
Professor Steve Hanney (Professorial Research Fellow, Health Services Research) led the project and was involved in all stages of the project and in the preparation of the report.
Dr Annette Boaz (Reader, Health Care Research) was involved in all stages of the project and in the preparation of the report.
Teresa Jones (Research Fellow, Information Science) was particularly involved in co-ordinating the review of papers, and contributed to all phases of the project and preparation of the report.
Professor Bryony Soper (Honorary Professor, Health Services Research) was involved in all stages of the project and in the preparation of the report.
We thank several colleagues for their contributions. Dr Claire Donovan (Reader, Health Economics Research Group, Brunel University) was a co-applicant on the original proposal, but very shortly after the start of the project was seconded full-time to a government department for most of the remaining duration of the project. She made an important contribution to the initial phase of the work. Sarah Lawson (Senior Information Scientist, King's College London) advised the team on the scope and structure of the literature searches. Sarah Jeal conducted the database searches for the focused review. Eszter Nagy provided expert secretarial assistance and was involved in the documenting of the report.
We are most grateful to the advisory group who provided expert guidance: Professor Martin Buxton, Professor Ewan Ferlie, Professor Cy Frank, Dr Tracey Koehlmoos, Professor Stuart Logan, Professor Russell Mannion, Dr Fiona Moss, Professor Ray Pawson, Professor Trevor Sheldon, Professor Ben Martin and Professor Tikki Pang. The group included two patient representatives who were consulted at various stages throughout the project: Christine Gratus and Dr Charlotte Williamson, OBE.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health.
References
- National Institute for Health Research Service Delivery and Organisation. (10/1012). Call for Proposals: Expedited Evidence Synthesis to Support 2010. www.netscc.ac.uk/hsdr/files/project/SDO_CB_10-1012-09_V01.pdf (accessed 8 August 2013).
- Equity and excellence: liberating the NHS. London: DoH; 2010.
- NHS Confederation . Being a good research partner: the virtues and rewards. Briefing 2010. www.nhsconfed.org/Publications/Documents/HSRN_briefing_270.pdf (accessed 8 August 2013).
- Association of UK University Hospitals . University Hospitals Awarded Top Scores in Annual Care Quality Commission Report 2009. www.aukuh.org.uk/index.php/news-and-consultations/news-archive/28-university-hospitals-awarded-top-scores-in-annual-cqc-report (accessed 8 August 2013).
- Best research for best health: a new National Health Research Strategy – the NHS contribution to health research in England. London: DoH Research and Development Directorate; 2006.
- Freeney Y, Tiernan J. Employee engagement: an overview of the literature on the proposed antithesis to burnout. Ir J Psychol 2006;27:130-41. http://dx.doi.org/10.1080/03033910.2006.10446236.
- Clark J, Spurgeon P, Hamilton P. Medical professionalism: leadership competency an essential ingredient. Int J Clin Leadership 2008;16:3-9.
- Bakker AB, Burke RJ, Cooper CL. The peak performing organization. Oxon: Routledge; 2009.
- Spurgeon P, Mazelan PM, Barwell F. Medical engagement: a crucial underpinning to organizational performance. Health Serv Manage Res 2011;24:114-20. http://dx.doi.org/10.1258/hsmr.2011.011006.
- Ferlie EB, Shortell SM. Improving the quality of health care in the United Kingdom and the United States: a framework for change. Milbank Q 2001;79:281-315. http://dx.doi.org/10.1111/1468-0009.00206.
- Blevins D, Farmer MS, Edlund C, Sullivan G, Kirchner JE. Collaborative research between clinicians and researchers: a multiple case study of implementation. Implementation Sci 2010;5. http://dx.doi.org/10.1186/1748-5908-5-76.
- MacLeod D, Clarke N. Engaging for success: enhancing performance through employee engagement. London: Department for Business, Innovation and Skills; 2009.
- Clark J. Medical engagement. Too important to be left to chance. London: The King's Fund; 2012.
- Reinertsen J, Gosfield A, Rupp W, Whittington J. Engaging physicians in a shared quality agenda. Cambridge, MA: Institute for Healthcare Improvement; 2007.
- Dewar S, Levenson R, Shepherd S. Understanding doctors: harnessing professionalism. London: King's Fund; 2008.
- Tooke J. Aspiring to excellence – final report of the independent enquiry into modernising medical careers. London: MMC Inquiry; 2008.
- Mechanic D. Rethinking medical professionalism: the role of information technology and practice innovations. Milbank Q 2008;86:327-58. http://dx.doi.org/10.1111/j.1468-0009.2008.00523.x.
- Goddard M, Davies H, Dawson D, Mannion R, McInnes F. Clinical performance measurement: part 1 – getting the best out of it. J R Soc Med 2002;95:508-10. http://dx.doi.org/10.1258/jrsm.95.10.508.
- Goddard M, Mannion R. The role of horizontal and vertical approaches to performance measurement and improvement in the UK public sector. Publ Perform Manag Rev 2004;28:75-9.
- Scott T, Mannion R, Marshall M, Davies H. Does organisational culture influence health care performance? A review of the evidence. J Health Serv Res Policy 2003;8:105-17. http://dx.doi.org/10.1258/135581903321466085.
- Mannion R, Davies H, Harrison S. Changing management cultures and organisational performance in the NHS (OC2). Norwich: Queen’s Printer and Controller of HMSO; 2010.
- Mannion R, Davies H, Marshall M. Impact of star performance ratings in English acute hospital trusts. J Health Serv Res Policy 2005;10:18-24. http://dx.doi.org/10.1258/1355819052801877.
- Smee C. ‘Measuring up’ improving health system performance in OECD countries. Improving value for money in the United Kingdom National Health Service: performance measurement and improvement in a centralised system. Paris: OECD Publishing; 2002.
- Bamford D, Chatziaslan E. Healthcare capacity measurement. Int J Productivity Performance Manag 2009;58:748-66. http://dx.doi.org/10.1108/17410400911000390.
- McDermott C, Stock G. Hospital operations and length of stay performance. Int J Oper Prod Man 2007;27:1020-42. http://dx.doi.org/10.1108/01443570710775847.
- Darzi A. High quality care for all: NHS next stage review final report. Cm7432. Norwich: The Stationery Office; 2008.
- Pettigrew A, Brignal T, Harvey J, Webb D. The determinants of organisational performance: a review of the literature. Supporting paper for the Bristol Inquiry. Coventry: Warwick Business School, Warwick University; 1999.
- Astbury B, Leeuw F. Unpacking black boxes: mechanisms and theory building in evaluation. Am J Eval 2010;31:363-81. http://dx.doi.org/10.1177/1098214010371972.
- Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organisations: systematic review and recommendations. Milbank Q 2004;82:581-629.
- Greenhalgh T, Robert G, Bate P, Macfarlane F, Kyriakidou O. Diffusion of innovations in health service organisations. A systematic literature review. Oxford: Blackwell Publishing; 2005.
- Clarke M, Loudon K. Effects on patients of their healthcare practitioner's or institution's participation in clinical trials: a systematic review. Trials 2011;12. http://dx.doi.org/10.1186/1745-6215-12-16.
- Caron-Flinterman JF, Broerse JE, Bunders JF. The experiential knowledge of patients: a new resource for biomedical research?. Soc Sci Med 2005;60:2575-84. http://dx.doi.org/10.1016/j.socscimed.2004.11.023.
- Hanney S, Buxton M, Green C, Coulson D, Raftery J. An assessment of the impact of the NHS Health Technology Assessment Programme. Health Technol Assess 2007;11.
- Wooding S, Hanney S, Buxton M, Grant J. Payback arising from research funding: evaluation of the Arthritis Research Campaign. Rheumatology 2005;44:1145-56. http://dx.doi.org/10.1093/rheumatology/keh708.
- Nason E, Curran B, Hanney S, Janta B, Hastings G, O’Driscoll M, et al. Evaluating health research funding in Ireland: assessing the impacts of the Health Research Board of Ireland’s funding activities. Res Evaluat 2011;20:193-200. http://dx.doi.org/10.3152/095820211X12941371876823.
- Hanney SR, Watt A, Jones TH, Metcalf L. Conducting retrospective impact analysis to inform a medical research charity's funding strategies: the case of Asthma UK. All Asth Clin Immun 2013;9. http://dx.doi.org/10.1186/1710-1492-9-17.
- Cohen WM, Levinthal DA. Innovation and learning: the two faces of R&D. Economic J 1989;99:569-96. http://dx.doi.org/10.2307/2233763.
- Cohen WM, Levinthal DA. Absorptive capacity: a new perspective on learning and innovation. Admin Sci Quart 1990;35:128-52. http://dx.doi.org/10.2307/2393553.
- Rosenberg N. Why do firms do basic research (with their own money)?. Res Policy 1990;19:165-74. http://dx.doi.org/10.1016/0048-7333(90)90046-9.
- Freeman C, Soete L. The economics of industrial innovation. Cambridge, MA: MIT Press; 2000.
- Zahra SA, George G. Absorptive capacity: a review, reconceptualization, and extension. Acad Manage Rev 2002;27:185-203. http://dx.doi.org/10.5465/AMR.2002.6587995.
- Lane PJ, Koka BR, Pathak S. The reification of absorptive capacity: a critical review and rejuvenation of the construct. Acad Manage Rev 2006;31:833-63. http://dx.doi.org/10.5465/AMR.2006.22527456.
- Knudsen H, Roman P. Modelling the use of innovations in private treatment organizations: the role of absorptive capacity. J Subst Abuse Treat 2004;26:353-61. http://dx.doi.org/10.1016/S0740-5472(03)00158-2.
- Buxton M, Hanney S. How can payback from health services research be assessed?. J Health Serv Res Policy 1996;1:35-43.
- Buxton M, Hanney S. Evaluating the NHS Research and Development Programme: will the programme give value for money?. J Roy Soc Med 1998;91:2-6.
- Buxton M, Elliott R, Hanney S, Henkel M, Keen J, Sculpher MJ, et al. Assessing payback from Department of Health research and development: preliminary report – volume two: eight case studies. Uxbridge: Health Economics Research Group, Brunel University; 1994.
- Hanney S, Grant J, Wooding S, Buxton M. Proposed methods for reviewing the outcomes of research: the impact of funding by the UK's ‘Arthritis Research Campaign’. Health Res Policy Syst 2004;2.
- Frank C, Nason E. Health research: measuring the social, health and economic benefits. CMAJ 2009;180:528-34. http://dx.doi.org/10.1503/cmaj.090016.
- Making an impact: a preferred framework and indicators to measure returns on investment in health research. Ottawa, ON: CAHS; 2009.
- Pang T, Sadana R, Hanney S, Bhutta Z, Hyder A, Simon J. Knowledge for better health – a conceptual framework and foundation for health research systems. Bull World Health Organ 2003;81:815-20.
- Callon M. Is science a public good – Fifth Mullins Lecture, Virginia-Polytechnic-Institute, 23 March 1993. Sci Technol Hum Val 1994;19:395-424. http://dx.doi.org/10.1177/016224399401900401.
- Rogers E. Diffusion of innovations. New York, NY: Free Press; 2003.
- Frambach R, Schillewaert N. Organizational innovation adoption – a multi-level framework of determinants and opportunities for future research. J Bus Res 2002;55:163-76. http://dx.doi.org/10.1016/S0148-2963(00)00152-1.
- Denis J-L, Hébert Y, Langley A, Lozeau D, Trottier L-H. Explaining diffusion patterns for complex health care innovations. Health Care Manage Rev 2002;27:60-73. http://dx.doi.org/10.1097/00004010-200207000-00007.
- Ryan B, Gross N. The diffusion of hybrid seed corn in two Iowa communities. Rural Sociology 1943;8:15-24.
- Minasian LM, Carpenter WR, Weiner BJ, Anderson DE, McCaskill-Stevens W, Nelson S, et al. Translating research into evidence-based practice: the National Cancer Institute Community Clinical Oncology Program. Cancer 2010;116:4440-9. http://dx.doi.org/10.1002/cncr.25248.
- Darbyshire J, Sitzia J, Cameron D, Ford G, Littlewood S, Kaplan R, et al. Extending the clinical research network approach to all of healthcare. Ann Oncol 2011;22:vii36-vii43. http://dx.doi.org/10.1093/annonc/mdr424.
- Clinical Research Network Coordinating Centre . Our Objectives n.d. http://www.crncc.nihr.ac.uk/about_us/performance_objectives.htm (accessed 8 August 2013).
- Cameron D, Stead M, Lester N, Parmar M, Haward R, Maughan T, et al. Research-intensive cancer care in the NHS in the UK. Ann Oncol 2011;22:vii29-vii35. http://dx.doi.org/10.1093/annonc/mdr423.
- Denis J, Lomas J. Convergent evolution: the academic and policy roots of collaborative research. J Health Serv Res Policy 2003;8:1-6. http://dx.doi.org/10.1258/135581903322405108.
- Kogan M, Henkel M. Government and research: the Rothschild experiment in a government department. London: Heinemann; 1983.
- Weiss C. Using social research in public policy making. Lexington, MA: D.C. Heath; 1977.
- Lomas J. Finding audiences, changing beliefs: the structure of research use in Canadian health policy. J Health Polit Policy Law 1990;15:525-42. http://dx.doi.org/10.1215/03616878-15-3-525.
- Lomas J. Using ‘linkage and exchange’ to move research into policy at a Canadian foundation. Health Aff 2000;19:236-40. http://dx.doi.org/10.1377/hlthaff.19.3.236.
- Gibbons M, Limoges C, Nowotny H, Schwartzman S, Scott P, Trow M. The new production of knowledge. London: Sage; 1994.
- Nowotny H, Scott P, Gibbons M. Re-thinking science: knowledge and the public in an age of uncertainty. Oxford: Policy Press; 2001.
- Ferlie E, Wood M. Novel mode of knowledge production? Producers and consumers in health services research. J Health Serv Res Policy 2003;8:51-7. http://dx.doi.org/10.1258/135581903322405171.
- Moscucci O, Herring R, Berridge V. Networking health research in Britain: the post-war childhood leukaemia trials. Twentieth Cent Br Hist 2009;20:23-52. http://dx.doi.org/10.1093/tcbh/hwn039.
- Rogers E. Diffusion of innovations. New York, NY: Free Press; 1995.
- Walshe K, Davies H. Research, influence and impact: deconstructing the norms of health services research commissioning. Policy and Society 2010;29:103-11. http://dx.doi.org/10.1016/j.polsoc.2010.03.003.
- Lewin K. Action research and minority problems. J Soc Issues 1946;2:34-46. http://dx.doi.org/10.1111/j.1540-4560.1946.tb02295.x.
- Jagosh J, Macaulay A, Pluye J, Salsberg J, Bush P, Henderson J, et al. Uncovering the benefits of participatory research: implications of a realist review for health research and practice. Milbank Q 2012;90:311-46. http://dx.doi.org/10.1111/j.1468-0009.2012.00665.x.
- Lasker R, Weiss E, Miller R. Partnership synergy: a practical framework for studying and strengthening the collaborative advantage. Milbank Q 2001;79:179-205. http://dx.doi.org/10.1111/1468-0009.00203.
- Crilly T, Jashapara A, Ferlie E. Report to NIHR SDO programme. Norwich: Queen's Printer and Controller of HMSO; 2010.
- Report of the High Level Group (HLG) on clinical effectiveness. London: DoH; 2007.
- Oliver A. The Veterans Health Administration: an American success story?. Milbank Q 2007;85:5-35. http://dx.doi.org/10.1111/j.1468-0009.2007.00475.x.
- Kizer KW, Demakis JG, Feussner JR. Reinventing VA health care: systematizing quality improvement and quality innovation. Med Care 2000;38:7-16. http://dx.doi.org/10.1097/00005650-200006001-00002.
- Graham ID, Tetroe JM. Learning from U.S. Department of Veterans Affairs Quality Enhancement Research Initiative: QUERI Series. Implementation Sci 2009;4. http://dx.doi.org/10.1186/1748-5908-4-13.
- Vist GE, Bryant D, Somerville L, Birminghem T, Oxman AD. Outcomes of patients who participate in randomized controlled trials compared to similar patients receiving similar interventions who do not participate. Cochrane Database Syst Rev 2008;3. http://dx.doi.org/10.1002/14651858.MR000009.pub4.
- Stiller CA. Centralised treatment, entry to trials and survival. Br J Cancer 1994;70:352-62. http://dx.doi.org/10.1038/bjc.1994.306.
- Papanikolaou PN, Christidi GD, Ioannidis JPA. Patient outcomes with teaching versus nonteaching healthcare: a systematic review. PLOS Med 2006;3:1603-15. http://dx.doi.org/10.1371/journal.pmed.0030341.
- Ayanian J, Weissman J. Teaching hospitals and quality of care: a review of the literature. Milbank Q 2002;80:569-93. http://dx.doi.org/10.1111/1468-0009.00023.
- Landrum BA. Causal effect of ambulatory specialty care on mortality following myocardial infarction: a comparison of propensity score and instrumental variable analyses. Health Serv Outcomes Res Methodol 2001;2:221-45.
- Gabler NB, Duan N, Vohra S, Kravitz RL. N-of-1 trials in the medical literature: a systematic review. Med Care 2011;49:761-8.
- Lavis J, Davies H, Oxman A, Denis J-L, Golden-Biddle K, Ferlie E. Towards systematic reviews that inform health care management and policy-making. J Health Serv Res Policy 2005;10:35-48. http://dx.doi.org/10.1258/1355819054308549.
- Innvær S, Vist G, Trommald M, Oxman A. Health policy-makers' perceptions of their use of evidence: a systematic review. J Health Serv Res Policy 2002;7:239-44. http://dx.doi.org/10.1258/135581902320432778.
- Soh KL, Davidson PM, Leslie G, Bin Abdul Rahman A. Action research studies in the intensive care setting: a systematic review. Int J Nurs Stud 2011;48:258-68. http://dx.doi.org/10.1016/j.ijnurstu.2010.09.014.
- Unahalekhaka A. Collaborative quality improvement reduced ventilator-associated pneumonia in a limited resource country. Int J Infec Control 2008;4. http://dx.doi.org/10.3396/ijic.v4s1.020.08.
- Munn-Giddings C, McVicar A, Smith L. Systematic review of the uptake and design of action research in published nursing research, 2000–2005. J Res Nurs 2008;13:465-77. http://dx.doi.org/10.1177/1744987108090297.
- Mitton C, Adair CE, McKenzie E, Patten SB, Waye B, Adair E. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q 2007;85:729-68. http://dx.doi.org/10.1111/j.1468-0009.2007.00506.x.
- Ward V, House A, Hamer S. Developing a framework for transferring knowledge into action: a thematic analysis of the literature. J Health Serv Res Policy 2009;14:156-64. http://dx.doi.org/10.1258/jhsrp.2009.008120.
- Squires JE, Estabrooks CA, Gustavsson P, Wallin L. Individual determinants of research utilization by nurses: a systematic review update. Implementation Sci 2011;6. http://dx.doi.org/10.1186/1748-5908-6-1.
- Selby P. Summary: the impact of the process of clinical research on health service outcomes. Ann Oncol 2011;22:vii2-vii4.
- Pater J, Rochon J, Parmar M, Selby P. Future research and methodological approaches. Ann Oncol 2011;22:vii57-vii61. http://dx.doi.org/10.1093/annonc/mdr428.
- Krzyzanowska MK, Kaplan R, Sullivan R. How may clinical research improve healthcare outcomes?. Ann Oncol 2011;22:vii10-vii15. http://dx.doi.org/10.1093/annonc/mdr420.
- Health Services Research: Work Force and Educational Issues. Washington, DC: The National Academies Press; 1995.
- Adler MW. Changes in local clinical practice following an experiment in medical care: evaluation of evaluation. J Epidemiol Community Health 1978;32:143-6. http://dx.doi.org/10.1136/jech.32.2.143.
- Andersen M, Kragstrup J, Sondergaard J. How conducting a clinical trial affects physicians' guideline adherence and drug preferences. JAMA 2006;295:2759-64. http://dx.doi.org/10.1001/jama.295.23.2759.
- Chen AY, Schrag N, Hao Y, Flanders WD, Kepner J, Stewart A, et al. Changes in treatment of advanced laryngeal cancer 1985–2001. Otolaryngol Head Neck Surg 2006;135:831-7. http://dx.doi.org/10.1016/j.otohns.2006.07.012.
- Clark WF, Garg AX, Blake PG, Rock GA, Heidenheim AP, Sackett DL. Effect of awareness of a randomized controlled trial on use of experimental therapy. JAMA 2003;290:1351-5. http://dx.doi.org/10.1001/jama.290.10.1351.
- Das D, Ishaq S, Harrison R, Kosuri K, Harper E, Decaestecker J, et al. Management of Barrett's esophagus in the UK: overtreated and underbiopsied but improved by the introduction of a national randomized trial. Am J Gastroenterol 2008;103:1079-89. http://dx.doi.org/10.1111/j.1572-0241.2008.01790.x.
- du Bois A, Rochon J, Lamparter C, Pfisterer J, Organkommission OVAR. Pattern of care and impact of participation in clinical studies on the outcome in ovarian cancer. Int J Gynecol Cancer 2005;15:183-91.
- Hébert-Croteau N, Brisson J, Latreille J, Blanchette C, Deschenes L. Variations in the treatment of early-stage breast cancer in Quebec between 1988 and 1994. CMAJ 1999;161:951-5.
- Janni W, Kiechle M, Sommer H, Rack B, Gauger K, Heinrigs M, et al. Study participation improves treatment strategies and individual patient care in participating centers. Anticancer Res 2006;26:3661-7. http://dx.doi.org/10.1016/S0960-9776(05)80107-9.
- Jha P, Deboer D, Sykora K, Naylor CD. Characteristics and mortality outcomes of thrombolysis trial participants and nonparticipants: a population-based comparison. J Am Coll Cardiol 1996;27:1335-42. http://dx.doi.org/10.1016/0735-1097(96)00018-6.
- Jones B, Ratzer E, Clark J, Zeren F, Haun W. Does peer-reviewed publication change the habits of surgeons?. Am J Surg 2000;180:566-9. http://dx.doi.org/10.1016/S0002-9610(00)00495-5.
- Karjalainen S, Palva I. Do treatment protocols improve end results? A study of survival of patients with multiple myeloma in Finland. BMJ 1989;299:1069-72. http://dx.doi.org/10.1136/bmj.299.6707.1069.
- Kizer JR, Cannon CP, McCabe CH, Mueller HS, Schweiger MJ, Davis VG, et al. Trends in the use of pharmacotherapies for acute myocardial infarction among physicians who design and/or implement randomized trials versus physicians in routine clinical practice: the MILIS-TIMI experience. Multicenter investigation on limitation of infarct size. Am Heart J 1999;137:79-92.
- Majumdar SR, Chang W-C, Armstrong PW. Do the investigative sites that take part in a positive clinical trial translate that evidence into practice?. Am J Med 2002;113:140-5. http://dx.doi.org/10.1016/S0002-9343(02)01166-X.
- Majumdar SR, Roe MT, Peterson ED, Chen AY, Gibler WB, Armstrong PW. Better outcomes for patients treated at hospitals that participate in clinical trials. Arch Intern Med 2008;168:657-62. http://dx.doi.org/10.1001/archinternmed.2007.124.
- Meineche-Schmidt V, Hvenegaard A, Juhl HH. Participation in a clinical trial influences the future management of patients with gastro-oesophageal reflux disease in general practice. Aliment Pharmacol Ther 2006;24:1117-25. http://dx.doi.org/10.1111/j.1365-2036.2006.03046.x.
- Morton AN, Bradshaw CS, Fairley CK. Changes in the diagnosis and management of bacterial vaginosis following clinical research. Sex Health 2006;3:183-5. http://dx.doi.org/10.1071/SH06024.
- Pancorbo-Hidalgo PL, Garcia-Fernandez FP, Lopez-Medina IM, Lopez-Ortega J. Pressure ulcer care in Spain: nurses’ knowledge and clinical practice. J Adv Nurs 2007;58:327-38. http://dx.doi.org/10.1111/j.1365-2648.2007.04236.x.
- Pons J, Sais C, Illa C, Méndez R, Suñen E, Casas M, et al. Is there an association between the quality of hospitals' research and their quality of care?. J Health Serv Res Policy 2010;15:204-9. http://dx.doi.org/10.1258/jhsrp.2010.009125.
- Rich AL, Tata LJ, Free CM, Stanley RA, Peake MD, Baldwin DR, et al. How do patient and hospital features influence outcomes in small-cell lung cancer in England?. Br J Cancer 2011;105:746-52. http://dx.doi.org/10.1038/bjc.2011.310.
- Rochon J, du Bois A. Clinical research in epithelial ovarian cancer and patients' outcome. Ann Oncol 2011;22:vii16-vii19. http://dx.doi.org/10.1093/annonc/mdr421.
- Salbach NM, Guilcher SJ, Jaglal SB, Davis DA. Determinants of research use in clinical decision making among physical therapists providing services post-stroke: a cross-sectional study. Implementation Sci 2010;5. http://dx.doi.org/10.1186/1748-5908-5-77.
- Abraham AJ, Knudsen HK, Rothrauff TC, Roman PM. The adoption of alcohol pharmacotherapies in the Clinical Trials Network: the influence of research network participation. J Subst Abuse Treat 2010;38:275-83. http://dx.doi.org/10.1016/j.jsat.2010.01.003.
- Carpenter WR, Reeder-Hayes K, Bainbridge J, Meyer A-M, Amos KD, Weiner BJ, et al. The role of organizational affiliations and research networks in the diffusion of breast cancer treatment innovation. Med Care 2011;49:172-9. http://dx.doi.org/10.1097/MLR.0b013e3182028ff2.
- Ducharme LJ, Knudsen HK, Roman PM, Johnson JA. Innovation adoption in substance abuse treatment: exposure, trialability, and the Clinical Trials Network. J Subst Abuse Treat 2007;32:321-9. http://dx.doi.org/10.1016/j.jsat.2006.05.021.
- Knudsen HK, Abraham AJ, Johnson JA, Roman PM. Buprenorphine adoption in the National Drug Abuse Treatment Clinical Trials Network. J Subst Abuse Treat 2009;37:307-12. http://dx.doi.org/10.1016/j.jsat.2008.12.004.
- Laliberte L, Fennell ML, Papandonatos G. The relationship of membership in research networks to compliance with treatment guidelines for early-stage breast cancer. Med Care 2005;43:471-9. http://dx.doi.org/10.1097/01.mlr.0000160416.66188.f5.
- Rhyne R, Sussman AL, Fernald D, Weller N, Daniels E, Williams RL, et al. Reports of persistent change in the clinical encounter following research participation: a report from the Primary Care Multiethnic Network (PRIME Net). J Am Board Fam Med 2011;24:496-502. http://dx.doi.org/10.3122/jabfm.2011.05.100295.
- Siegel RM, Bien J, Lichtenstein P, Davis J, Khoury JC, Knight JE, et al. A safety-net antibiotic prescription for otitis media: the effects of a PBRN study on patients and practitioners. Clin Pediatr 2006;45:518-24. http://dx.doi.org/10.1177/0009922806290567.
- Warnecke R, Johnson T, Kaluzny A, Ford L. The community clinical oncology program: its effect on clinical practice. J Qual Improv 1995;21:336-9.
- Chaney EF, Rubenstein LV, Liu C-F, Yano EM, Bolkan C, Lee M, et al. Implementing collaborative care for depression treatment in primary care: a cluster randomized evaluation of a quality improvement practice redesign. Implement Sci 2011;6. http://dx.doi.org/10.1186/1748-5908-6-121.
- Goldberg HI, Neighbor WE, Hirsch IB, Cheadle AD, Ramsey SD, Gore E. Evidence-based management: using serial firm trials to improve diabetes care quality. Joint Commission J Qual Improvement 2002;28:155-66.
- Hall C, Sigford B, Sayer N. Practice changes associated with the Department of Veterans Affairs' Family Care Collaborative. J Gen Int Med 2010;25:18-26. http://dx.doi.org/10.1007/s11606-009-1125-3.
- Puoane T, Sanders D, Ashworth A, Chopra M, Strasser S, McCoy D. Improving the hospital management of malnourished children by participatory research. Int J Qual Health Care 2004;16:31-40. http://dx.doi.org/10.1093/intqhc/mzh002.
- Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implementation Sci 2012;7. http://dx.doi.org/10.1186/1748-5908-7-50.
- Kahn K, Ryan G, Beckett M, Taylor S, Berrebi C, Cho M, et al. Bridging the gap between basic science and clinical practice: a role for community clinicians. Implementation Sci 2011;6. http://dx.doi.org/10.1186/1748-5908-6-34.
- Belkhodja O, Amara N, Landry R, Ouimet M. The extent and organizational determinants of research utilization in Canadian health services organizations. Sci Commun 2007;28:377-417. http://dx.doi.org/10.1177/1075547006298486.
- Tranmer JE, Lochhaus-Gerlach J, Lam M. The effect of staff nurse participation in a clinical nursing research project on attitude towards, access to, support of and use of research in the acute care setting. Can J Nurs Leadersh 2002;15:18-26.
- Niederhauser VP, Kohr L. Research endeavors among pediatric nurse practitioners (REAP) study. J Pediatr Health Care 2005;19:80-9. http://dx.doi.org/10.1016/j.pedhc.2004.08.007.
- Tsai SL. Nurses' participation and utilization of research in the Republic of China. Int J Nurs Stud 2000;37:435-44. http://dx.doi.org/10.1016/S0020-7489(00)00023-7.
- McCoy JM, Byers JF. Using evidence-based performance improvement in the community hospital setting. J Healthc Qual 2006;28:13-7. http://dx.doi.org/10.1111/j.1945-1474.2006.tb00639.x.
- Bush A. How has research in the last five years changed my clinical practice?. Arch Dis Childhood 2005;90:832-6. http://dx.doi.org/10.1136/adc.2004.066241.
- Christmas C, Durso SC, Kravet SJ, Wright SM. Advantages and challenges of working as a clinician in an academic department of medicine: academic clinicians' perspectives. J Grad Med Educ 2010;2:478-84. http://dx.doi.org/10.4300/JGME-D-10-00100.1.
- Francis J, Perlin JB. Improving performance through knowledge translation in the Veterans Health Administration. J Contin Educ Health Prof 2006;26:63-71. http://dx.doi.org/10.1002/chp.52.
- Maher EJ, Timothy A, Squire CJ, Goodman A, Karp SJ, Paine CH, et al. The Royal College of Radiologists’ report. Audit: the use of radiotherapy for NSCLC in the UK. Clin Oncol 1993;5:72-9.
- Daniel D, Maund C, Butler K. Community hospital participation in a pilot project for venous thromboembolism quality measures: learning, collaboration, and early improvement. Joint Commission J Qual Patient Safety 2010;36:301-9.
- Myers S. The demonstration project as a procedure for accelerating the application of new technology. Washington, DC: Institute of Public Administration; 1978.
- Lewis S. Toward a general theory of indifference to research-based evidence. J Health Serv Res Policy 2007;12:166-72. http://dx.doi.org/10.1258/135581907781543094.
- Dixon S, Coleman P, Nicholl J, Brennan A, Touch S. Evaluation of the impact of a technology appraisal process in England: the South and West Development and Evaluation Committee. J Health Serv Res Policy 2003;8:18-24. http://dx.doi.org/10.1258/13558190360468182.
- Coleman J, Katz E, Menzel H. Medical innovation: a diffusion study. New York, NY: Bobbs-Merrill Company; 1966.
- Haynes AB, Weiser TG, Berry WR, Lipsitz SR, Breizat A-HS, Dellinger EP, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med 2009;360:491-9. http://dx.doi.org/10.1056/NEJMsa0810119.
- Beckett M, Quiter E, Ryan G, Berrebi C, Taylor S, Cho M, et al. Bridging the gap between basic science and clinical practice: The role of organizations in addressing clinician barriers. Implementation Sci 2011;6. http://dx.doi.org/10.1186/1748-5908-6-35.
- Ryan G, Berrebi C, Beckett M, Taylor S, Quiter E, Cho M, et al. Reengineering the clinical research enterprise to involve more community clinicians. Implementation Sci 2011;6. http://dx.doi.org/10.1186/1748-5908-6-36.
- Teal R, Bergmire DM, Johnston M, Weiner BJ. Implementing community-based provider participation in research: an empirical study. Implementation Sci 2012;7. http://dx.doi.org/10.1186/1748-5908-7-41.
- Gilbert GH, Williams OD, Rindal DB, Pihlstrom DJ, Benjamin PL, Wallace MC, et al. The creation and development of the dental practice-based research network. J Am Dent Assoc 2008;139:74-81.
- Gilbert GH, Richman JS, Qvist V, Pihlstrom DJ, Foy PJ, Gordan VV, et al. Change in stated clinical practice associated with participation in the dental practice-based research network. Gen Dent 2010;58:520-8.
- Mold JW, Peterson KA. Primary care practice-based research networks: working at the Interface between research and quality improvement. Ann Fam Med 2005;3:12-20. http://dx.doi.org/10.1370/afm.303.
- Nagykaldi Z, Mold JW, Aspy CB. Practice facilitators: a review of the literature. Fam Med 2005;37:581-8.
- Lindbloom EJ, Ewigman BG, Hickner JM. Practice-based research networks. Med Care 2004;42:III-45-III-49. http://dx.doi.org/10.1097/01.mlr.0000119397.65643.d4.
- Williams RL, Rhyne RL. No longer simply a practice-based research network (PBRN): health improvement networks. J Am Board Fam Med 2011;24:485-8. http://dx.doi.org/10.3122/jabfm.2011.05.110102.
- Aiello EJ, Buist DSM, Wagner EH, Tuzzio L, Greene SM, Lamerato LE, et al. Diffusion of aromatase inhibitors for breast cancer therapy between 1996 and 2003 in the Cancer Research Network. Breast Cancer Res Treat 2008;107:397-403. http://dx.doi.org/10.1007/s10549-007-9558-z.
- Bodenheimer T, Young DM, MacGregor K, Holtrop JS. Practice-based research in primary care: facilitator of, or barrier to, practice improvement?. Ann Fam Med 2005;3:S28-32. http://dx.doi.org/10.1370/afm.341.
- Thomas P, Graffy J, Wallace P, Kirby M. How primary care networks can help integrate academic and service initiatives in primary care. Ann Fam Med 2006;4:235-9. http://dx.doi.org/10.1370/afm.521.
- Goodwin N, Peck E, Freeman T, Posaner R. Networks briefing – key lessons for networks. London: NHS SDO programme; 2004.
- Morris E, Downing A, Gregory W, Seymour M, Thomas J, Whitehouse L, et al. LB36 – Colorectal Cancer Research Activity and Its Relationship to Clinical Outcomes. 2012. http://conference.ncri.org.uk/abstracts/2012/abstracts/LB36.html (accessed 8 August 2013).
- Whicher DM, Chalkidou K, Dhalla IA, Levin L, Tunis S. Comparative effectiveness research in Ontario, Canada: producing relevant and timely information for health care decision makers. Milbank Q 2009;87:585-606. http://dx.doi.org/10.1111/j.1468-0009.2009.00572.x.
- Kuehlein T, Goetz K, Laux G, Gutscher A, Szecsenyi J, Joos S. Antibiotics in urinary-tract infections. Sustained change in prescribing habits by practice test and self-reflection: a mixed methods before–after study. BMJ Qual Safety 2011;20:522-6. http://dx.doi.org/10.1136/bmjqs.2010.047357.
- Walshe K, Rundall TG. Evidence-based management: from theory to practice in health care. Milbank Q 2001;79:429-57. http://dx.doi.org/10.1111/1468-0009.00214.
- Bullock A, Morris ZS, Atwell C. Collaboration between health services managers and researchers: making a difference?. J Health Serv Res Policy 2012;17:2-10. http://dx.doi.org/10.1258/jhsrp.2011.011099.
- Bullock A, Morris ZS, Atwell C. Final report. Norwich: Queen's Printer and Controller of HMSO; 2012.
- Denis J-L, Lehoux P, Hivon M, Champagne F. Creating a new articulation between research and practice through policy? The views and experiences of researchers and practitioners. J Health Serv Res Policy 2003;8:44-50. http://dx.doi.org/10.1258/135581903322405162.
- Ross S, Lavis J, Rodriguez C, Woodside J, Denis J-L. Partnership experiences: involving decision-makers in the research process. J Health Serv Res Policy 2003;8:26-34. http://dx.doi.org/10.1258/135581903322405144.
- Eagar K, Cromwell D, Owen A, Senior K, Gordon R, Green J. Health services research and development in practice: an Australian experience. J Health Serv Res Policy 2003;8:7-13. http://dx.doi.org/10.1258/135581903322405117.
- Bowen S, Martens P, The Need to Know Team. Demystifying knowledge translation: learning from the community. J Health Serv Res Policy 2005;10:203-11.
- Denis J-L, Lomas J, Stipich N. Creating receptor capacity for research in the health system: the Executive Training for Research Application (EXTRA) program in Canada. J Health Serv Res Policy 2008;13:1-7. http://dx.doi.org/10.1258/jhsrp.2007.007123.
- Antil T, Desrochers M, Joubert P, Bouchard C. Implementation of an innovative grant programme to build partnerships between researchers, decision-makers and practitioners: the experience of the Quebec Social Research Council. J Health Serv Res Policy 2003;8:35-43. http://dx.doi.org/10.1258/135581903322405153.
- Kothari A, Birch S, Charles C. ‘Interaction’ and research utilisation in health policies and programs: does it work?. Health Policy 2005;71:117-25. http://dx.doi.org/10.1016/j.healthpol.2004.03.010.
- Gold M, Taylor EF. Moving research into practice: lessons from the US Agency for Healthcare Research and Quality's IDSRN program. Implementation Sci 2007;2. http://dx.doi.org/10.1186/1748-5908-2-9.
- Harries U, Elliott H, Higgins A. Evidence-based policy-making in the NHS: exploring the interface between research and the commissioning process. J Public Health Med 1999;21:29-36. http://dx.doi.org/10.1093/pubmed/21.1.29.
- Elliott H, Popay J. How are policy makers using evidence? Models of research utilisation and local NHS policy making. J Epidemiol Community Health 2000;54:461-8. http://dx.doi.org/10.1136/jech.54.6.461.
- Glaser EM, Taylor SH. Factors influencing the success of applied research. Am Psychol 1973;28:140-6. http://dx.doi.org/10.1037/h0034203.
- Jacob R, Battista RN. Assessing technology assessment. Early results of the Quebec experience. Int J Technol Assess Health Care 1993;9:564-72. http://dx.doi.org/10.1017/S0266462300005481.
- Jacob R, McGregor M. Assessing the impact of health technology assessment. Int J Technol Assess Health Care 1997;13:68-80. http://dx.doi.org/10.1017/S0266462300010242.
- Assessment of health research fund outputs and outcomes: 1995–2003. Edmonton, AB: AHFMR; 2003.
- Wooding S, Hanney S, Pollitt A, Buxton M, Grant J. Project retrosight: understanding the returns from cardiovascular and stroke research: the policy report. Santa Monica, CA: RAND Corporation; 2011.
- Allergy. London: The Stationery Office; 2007.
- Sankaranarayanan R, Sauvaget C, Ramadas K, Ngoma T, Teguete I, Muwonge R, et al. Clinical trials of cancer screening in the developing world and their impact on cancer healthcare. Ann Oncol 2011;22:vii20-vii28. http://dx.doi.org/10.1093/annonc/mdr422.
- Ullrich S, McCutcheon H, Parker B. Reclaiming time for nursing practice in nutritional care: outcomes of implementing Protected Mealtimes in a residential aged care setting. J Clin Nurs 2011;20:1339-48. http://dx.doi.org/10.1111/j.1365-2702.2010.03598.x.
- Johnson PA, Bookman A, Bailyn L, Harrington M, Orton P. Innovation in ambulatory care: a collaborative approach to redesigning the health care workplace. Acad Med 2011;86:211-16. http://dx.doi.org/10.1097/ACM.0b013e318204618e.
- Alligood MR. Theory-based practice in a major medical centre. J Nurs Manag 2011;19:981-8. http://dx.doi.org/10.1111/j.1365-2834.2011.01327.x.
- Fryer K, Antony J, Ogden S. Performance management in the public sector. Int J Public Sector Manage 2009;22:478-98. http://dx.doi.org/10.1108/09513550910982850.
- Bourne M, Kennerley M, Franco-Santos M. Managing through measures: a study of impact on performance. J Manufacturing Technol Manage 2004;16:373-95. http://dx.doi.org/10.1108/17410380510594480.
- Best A, Greenhalgh T, Lewis S, Saul JE, Carroll S, Bitz J. Large-system transformation in health care: a realist review. Milbank Q 2012;90:421-56. http://dx.doi.org/10.1111/j.1468-0009.2012.00670.x.
- Craig TJ, Perlin JB, Fleming BB. Self-reported performance improvement strategies of highly successful Veterans Health Administration facilities. Am J Med Qual 2007;22:438-44. http://dx.doi.org/10.1177/1062860607304928.
- Asch SM, McGlynn EA, Hogan MM, Hayward RA, Shekelle P, Rubenstein L, et al. Comparison of quality of care for patients in the Veterans Health Administration and patients in a national sample. Ann Int Med 2004;141:938-45. http://dx.doi.org/10.7326/0003-4819-141-12-200412210-00010.
- Gao J, Moran E, Almenoff PL, Render ML, Campbell J, Jha AK. Variations in efficiency and the relationship to quality of care in the veterans health system. Health Aff 2011;30:655-63. http://dx.doi.org/10.1377/hlthaff.2010.0435.
- Lomas J. Health services research: more lessons from Kaiser Permanente and Veteran's Affairs healthcare system. BMJ 2003;327:1301-2.
- Longo WE, Cheadle W, Fink A, Kozol R, DePalma R, Rege R, et al. The role of the Veterans Affairs Medical Centers in patient care, surgical education, research and faculty development. Am J Surg 2005;190:662-75. http://dx.doi.org/10.1016/j.amjsurg.2005.07.001.
- Jha A, Perlin J, Kizer K, Dudley R. Effect of the transformation of the Veterans Affairs health care system on the quality of care. N Engl J Med 2003;348:2218-27.
- Perlin JB, Kolodner RM, Roswell RH. The veterans health administration: quality, value, accountability, and information as transforming strategies for patient-centered care. Am J Manag Care 2004;20:828-36. http://dx.doi.org/10.12927/hcpap.17381.
- Kizer K, Kirsh S. The double edged sword of performance measurement. J Gen Int Med 2012;27:395-7. http://dx.doi.org/10.1007/s11606-011-1981-5.
- Bernard D, Coburn K, Miani M. Health and disease management within an academic health system. Dis Manag Health Outcomes 2000;7:21-37. http://dx.doi.org/10.2165/00115677-200007010-00004.
- Peterson E, Shatin D, McCarthy D. Health services research at United Healthcare Corporation: the role of the center for health care policy and evaluation. Med Care Res Rev 1996;53:S65-S76.
- Kislov R, Harvey G, Walshe K. Collaborations for leadership in applied health research and care: lessons from the theory of communities of practice. Implementation Sci 2011;6. http://dx.doi.org/10.1186/1748-5908-6-64.
- Garrido T, Barbeau R. The Northern California Perinatal Research Unit: a hybrid model bridging research, quality improvement and clinical practice. Permanente J 2010;14:51-6.
- Best A, Holmes B. Systems thinking, knowledge and action: towards better models and methods. Evid Policy 2010;6:145-59. http://dx.doi.org/10.1332/174426410X502284.
- Nutley S, Walter I, Davies H. Using evidence: how research can inform public services. Bristol: The Policy Press; 2007.
- Demakis JG, McQueen L, Kizer KW, Feussner JR. Quality Enhancement Research Initiative (QUERI): a collaboration between research and clinical practice. Med Care 2000;38:17-25. http://dx.doi.org/10.1097/00005650-200006001-00003.
- The mandate: a mandate from the Government to the NHS Commissioning Board: April 2013 to March 2015. London: DoH; 2012.
- Academic health science networks. London: DoH; 2012.
- Schreyogg J, von Reitzenstein C. Strategic groups and performance differences among academic medical centers. Health Care Manag Rev 2008;33:225-33. http://dx.doi.org/10.1097/01.HMR.0000324908.42143.1f.
Appendix 1 Project protocol – version 3
Engagement in Research: Does it improve performance at a clinician, team, service and organisational level in healthcare organisations?
1. Aims/Objectives
This evidence synthesis aims to identify, analyse and synthesize the evidence related to the question:
- Does engagement in research improve performance at a clinician, team, service and organisational level in healthcare organisations?
The overall question has several aspects. These include questions and debates about:
- What definitions or aspects of engagement in research might be relevant for addressing the overall question?
- What dimensions of performance in healthcare organisations will be relevant for addressing the overall question, and how is improvement best measured?
- Does research engagement increase the ability and/or willingness of healthcare staff and systems to absorb and use relevant research findings from wherever they originate?
- Does research engagement by a range of stakeholders in healthcare organisations lead to the production of knowledge that addresses the needs of healthcare staff and patients more effectively and that will be more likely to be used in the local setting, and possibly other similar ones?
- In what ways does the adoption/utilisation of research improve performance (however measured) at clinician, team, service, and organisational level?
- Does research engagement encourage a more questioning and open culture that facilitates performance of healthcare staff and/or systems?
We do not believe there is an existing coherent body of evidence that has explicitly focused on the overall question. Hence, an initial objective will be to identify and explore, first, the theories that support the assumptions underpinning these questions and, second, whatever bodies of literature might have something relevant to say on the question or on a range of related issues. This initial exercise will be undertaken as a way of helping to focus the project and the literature search.
Once the scope of the literature search has been determined and the search undertaken the objective will be to map the resulting evidence so as to guide the analysis. The literature is likely to be highly diverse and of variable relevance to the question being addressed. A key objective will be to find a way of including as broad a range of literature as possible while using it to address the research question appropriately and effectively.
A final objective will be to make the findings as relevant as possible to the NHS by attempting to illustrate how far different types of research and modes of engagement in research improve performance at the various NHS levels, and identify the mechanisms that might be involved.
2. Background
Taking the wording of the Invitation to Tender (ITT), and the questions set out above, two key phrases emerge: ‘research engagement’ and ‘improved performance’. Very loosely bounded bodies of knowledge are specifically connected to each of these, and are thus relevant to our review. Related to the phrase ‘research engagement’ there is the literature about the different types of engagement in research, the benefits that come from health research and how they might be assessed, and the different ways in which health research systems might be organised to maximise such benefits. There is another body of knowledge related to the issue of the meaning and measurement of improved performance in health care systems; how far research engagement features in this will need to be examined. In this section we outline these two bodies of work. Where studies can be identified that have attempted to link research engagement (however interpreted) and improved healthcare performance they will, of course, be very important, and there are a few examples of studies that do link aspects of research engagement (for example, involvement in clinical trials) to aspects of performance (for example, levels of mortality and guideline adherence) (Majumdar et al., 2008).
We believe, however, that there is also a much wider literature that covers a range of potentially relevant and sometimes overlapping topics which might provide lessons; this literature will be included in the review. We will need to interrogate and arrange the various bodies of knowledge carefully before a clear picture, or evidence synthesis, can emerge, and those wider bodies of knowledge are briefly introduced later in section 5 on the contribution of existing research.
Research engagement, the organisation of the health research system and the assessment of impacts from research
Discussions about reforms to health research systems help define the scope of the phrases ‘research engagement’ and ‘engagement in research’ that are used in the ITT. The 2006 health research strategy document from the Department of Health (DH) also illustrates the desire to involve the healthcare system in research (DH, 2006). In line with such documents we view the term ‘research engagement’ broadly and take it to include healthcare staff being involved in undertaking research studies in some way and/or undertaking research training for a research degree. We also recognise that it can include playing a role in research design and commissioning, and in research networks, partnerships or collaborations. For example, the DH 2006 strategy document specifically referred to providing support for the NHS Chief Executive's Forum of the Service Delivery and Organisation (SDO) Programme (DH, 2006). It can also mean joining journal clubs and taking critical appraisal courses in order to understand and interpret research findings better.
Engagement in research includes engagement in the full range of health research from biomedical work through clinical studies to health services research and psychosocial care research. Indeed, clinical staff undertaking basic research can generate findings that, it is sometimes claimed, have a better chance than other such research of leading to more applied research and contributing to major advances in healthcare (Hanney et al., 2006; Wooding et al., 2011). There have been various analyses of the ways in which different groups of healthcare staff might engage in research, including, for example, the role of nurses and the allied health professions – considered by a taskforce set up to advise the Higher Education Funding Council for England (HEFCE) and the DH (HEFCE, 2001).
More generally, much of the discussion about how to organise health research systems has traditionally focused on the best ways to produce research. But alongside this there have been growing attempts to broaden this analysis and consider the needs of various stakeholders, including patients, in relation to health research (Hanney et al., 2010). Several overlapping aspects of this discussion are potentially relevant to the evidence synthesis proposed here. First, there is the notion that staff within a healthcare system who engage in research are more likely to absorb and utilise research findings from research undertaken anywhere. Second, there is the idea that engaging healthcare staff and patients in health research leads to research being undertaken that is more likely to be relevant to the needs of the healthcare system and thus more likely to be used by healthcare and management staff. These two concepts overlap, and some of the literature about both, and its relevance to the overall research question raised in this ITT, is explored below.
1. Research engagement as a way of increasing ability and willingness to use research
When clinicians and managers in a healthcare system are seen as stakeholders in the research system then their engagement in research can be a way of boosting their ability and willingness to use research from wherever it might originate. Drawing on parallels with the literature from analyses of industrial research (Cohen and Levinthal, 1989; Rosenberg, 1990), Buxton and Hanney at the Health Economics Research Group (HERG) included this increased capacity to use research as one of the benefits identified in their multidimensional categorisation of benefits from health research (Buxton and Hanney, 1994, 1996 and 1998; Hanney et al., 2004).
The concept of ‘absorptive capacity’ described by Cohen and Levinthal in the context of industrial research was further developed by various others, including Zahra and George (2002) in their article, Absorptive Capacity: A Review, Reconceptualisation, and Extension. As indicated in its title, this article and much of the later writing on absorptive capacity have broadened understanding of this concept, in which the extent of an organisation's engagement in research was originally a key factor. The result is that the earlier interpretation has perhaps sometimes been overlooked or downplayed. One paper, for example, identifies the three key measures of absorptive capacity as environmental scanning, collection of satisfaction data, and the level of workforce professionalism (Knudsen and Roman, 2004). We note, however, that some work to apply the concept (including aspects of the original conception) to health organisations has also been conducted in Canada as part of a stream of research on knowledge utilisation led by Rejean Landry – an important article in this stream of work is discussed below (Belkhodja et al., 2007).
Despite the apparently rather limited application of the term absorptive capacity within healthcare research, the specific concept used in Buxton and Hanney's Payback Framework – the notion that engagement in health research helps to increase the capacity to use health research – has been replicated in other health research impact assessment frameworks that build on the Payback Framework, for example in Canada (Frank and Nason, 2009; Canadian Academy of Health Sciences, 2009). However, empirical examples of the application of this concept in studies of the payback from health research are often somewhat hidden because they appear in case studies applying the full Payback Framework; nevertheless, some can be identified, including Buxton et al. (1994) and von Walden Laing (1997).
The concept of absorptive capacity is relevant to all parts of the healthcare system within a country. At a national level, for example, one of the benefits of undertaking research in the UK on the best way to audit the performance of intensive care units was that key findings from the considerably larger body of research on this topic in the USA could fruitfully be absorbed into the UK (Elliott and Hanney, 1994). More generally, however, some of these issues are of critical importance when the development of health research systems in developing countries is being considered by the World Health Organisation (Pang et al., 2003) and other international bodies.
Without necessarily being explicit about the concept of absorptive capacity, there have been repeated attempts in England over the last 20 years to build a health R&D system more closely into the NHS. The major reports on this include the House of Lords report (House of Lords, 1988) that led to the creation of the NHS R&D Programme (DH, 1991; Peckham, 1991; Harrison and New, 2002; Shergold and Grant, 2008). This was seen as the first attempt in any country to build a health research system so comprehensively into the national healthcare system (Black, 1997). After some retrenchment with the abolition of the regional level within the NHS (Hanney et al., 2010), the recent reforms have sought to further develop this approach (DH, 2006). In particular, recent developments such as Biomedical Research Centres and Units, and the Collaborations for Leadership in Applied Health Research and Care (CLAHRCs) are important attempts to develop collaboration, build research capacities and more fully incorporate research into the NHS (NIHR, 2009; Marjanovic et al., 2009). Analysis of these developments highlights that even within NIHR two rather different notions of building the research system into the health care system are at work: the Biomedical Research Centres (and subsequently the Academic Health Science Centres) are ways to bring a concentration of cutting-edge research into the healthcare system, whereas the CLAHRCs are attempting to integrate research into the healthcare system more widely (Hanney et al., 2010). Many of the accounts of the wider developments over the last 20 years have focused less on the ‘absorptive capacity’ type benefits that might come from engagement in research, and more on ways to improve the relevance and usefulness of the research that is undertaken. Some of the key literature on this second topic is outlined next.
2. Research engagement as a way of ensuring the research produced has more likelihood of being used to improve the healthcare system
Denis and Lomas (2003) traced the development of a collaborative approach to research in which researchers and potential users work together to identify research agendas, commission research, and so on. They identified the study by Kogan and Henkel of the English health department's R&D system (Kogan and Henkel, 1983) as a key early contribution (see also Kogan et al., 2006). Considerable work to analyse and operationalise this collaborative approach has been undertaken in the Canadian Health Services Research Foundation led by Jonathan Lomas (Lomas, 1990, 2000). His concept of ‘linkage and exchange’ between users and researchers has been particularly influential. Collaborative approaches to research somewhat mirror the concept of ‘Mode 2’ research, i.e. research that is context-driven, problem-focused and interdisciplinary (Gibbons et al., 1994; Nowotny et al., 2001). It is now widely argued that a collaborative approach will lead to research that is more likely to meet the needs of a healthcare system, and hence be used and improve performance. Clear evidence to support this hypothesis comes from the evaluation of the impact of the NHS Health Technology Assessment Programme (Hanney et al., 2007). This stream of work has considerable overlaps, as noted, with the work on absorptive capacity described above. In current initiatives and writings there is often overlapping emphasis on: first, collaborative working with users of research so as to produce applied research that is more likely to be of use and be applied by potential users; and, second, engaging users in the wider research enterprise in a variety of ways and through various networks, to encourage the uptake of research findings more generally. The following definition by Walshe and Davies (2010) of the role of CLAHRCs highlights these overlapping issues:
‘The collaborations have three key interlinked functions: conducting high quality applied health research; implementing the findings from this and other research in clinical practice; and increasing the capacity of local NHS organisations to engage with and apply research.’
Defining what is meant by performance at a clinician, team, service and organisational level in healthcare organisations
Turning to performance, the other key issue in the ITT, the case for the universal measurement of clinical and organisational performance is now well established (Goddard et al., 2002). But the specific nature of performance measurement is a contested topic about which there is a considerable literature. There is much dispute about how to define performance and no neat prescriptions for the design of performance measurement systems in the NHS and the public sector more generally. There is also vigorous debate about the relative merits of top-down hierarchical systems and approaches based on horizontal networks (Goddard and Mannion, 2004).
A variety of systems to assess NHS performance have been used in recent years – the star rating system for NHS trusts, clinical governance reviews, National Service Framework reviews, national and local audits, performance monitoring by strategic health authorities, and quality accounts (Mannion et al., 2010; Mannion et al., 2005; House of Commons, www.parliament.uk/briefingpapers/commons/lib/research/briefings/snsg-05237.pdf). The star rating system was originally developed by the Commission for Health Improvement and has been described as the fulcrum around which the NHS performance management system operates. But its coverage has also been challenged; many NHS staff think that important areas of clinical practice are under-valued or missed completely (Mannion et al., 2005). A quick trawl through these systems reveals some indicators related to research activity or wider research engagement within trusts. The relatively new ‘Quality Account’ will, where relevant, include data about the number of trust patients recruited to clinical research and help NHS organisations to develop indicators that have the potential to capture the quality of research. Our review will explore these and related issues, including how far the concept of performance as a professional involves engagement with research.
At a micro level there are suggestions of limited correlations between the impact (or performance) of specific research projects in terms of the production of knowledge and their impact in terms of contributing to wider benefits such as improved health policies and health outcomes (Wooding et al., 2011). On a related but slightly different point, however, a Spanish study found a low to moderate correlation between a hospital's risk-adjusted mortality ratio and the weighted citations ratio of the research conducted by the hospital (Pons et al., 2010). We will also explore the relations between research engagement, organisational culture and the performance of healthcare organisations. In addition we will attempt to gather evidence about whether the nature and quality of researchers’ engagement with patients in research studies is associated with improved healthcare performance. We will necessarily look back to the systems in operation over the last few years. However, in seeking to make this review relevant to the NHS now, we will also take heed of the present Government's most recent decisions.
Of the questions listed above, the one about whether the utilisation of research improves performance probably forms a key link between the two main issues of research engagement and improved performance. Furthermore, it is probably one of the questions where the existing evidence base is strongest, because there is now a considerable body of literature demonstrating that the findings from health research can lead to outcomes such as improved health and sometimes greater cost-effectiveness within the healthcare system. An important element of this evidence is based on the Payback Framework (Buxton and Hanney, 1996), and further details of this evidence are described below in the section on the need for the research. Not all approaches to research impact assessment attempt to go as far as the Payback Framework in including a wide range of benefits that could be considered as improvements to healthcare performance, but it might be worth exploring how far the more comprehensive approaches can be drawn on to try to establish pathways between research utilisation and improved healthcare performance.
This debate also links into the issue of the various levels within healthcare organisations at which it might be hoped that research engagement might improve performance. There are clearly complex overlaps between improved performance at the various levels of healthcare organisations: if individual clinicians improve their performance (for example, as assessed by clinical outcomes) as a result of utilising research findings then the performance measured on similar outcomes at team and service levels will presumably also improve. However, in at least some cases a key to improved uptake of research findings by clinicians is an improvement in the way in which the organisation itself is structured and managed.
The discussion about performance at different levels of healthcare organisations is complicated, but also illuminated, by discussions about distributed leadership, which suggest that, because health care organisations are complex professional bureaucracies, leadership is needed at different levels and not simply at the top, and that a large number of leaders from various professional backgrounds is required. Of particular importance are the clinical microsystems with which healthcare professionals, including doctors, often identify. Research suggests that team leadership of microsystems is a key factor in achieving high levels of performance. There is also evidence that constellations of leaders are needed at different levels when major change programmes are undertaken: ‘Improvement is likely to be confined to microsystems and teams unless there is alignment between top level leadership, and those working in other parts of the organisation.’ (Ham and Dickinson, 2008). In terms of improved performance resulting from the implementation of research findings, the relevance of these types of arguments is likely to depend partly on the types of research findings being implemented, as discussed above.
3. Need
As noted in the ITT, ‘the increasing investment in NHS R&D over recent years has focused attention on how NIHR measures or understands the impact of its investment in health research.’ The research proposed here will contribute to the NHS in several ways that have been identified in previous studies of the benefits of health research conducted for the NHS (Buxton et al., 2000). First, such studies are an important part of the processes of providing accountability for public expenditure by demonstrating the undoubted benefits that can arise from spending resources on health research. But, in addition, it has long been argued that the assessment of the benefits accruing from health research can also contribute to the development of ways to improve the conduct, management and organisation of health research so as to enhance those benefits.
Since 1993 a portfolio of work assessing the payback from health research has been conducted at the Health Economics Research Group (HERG) by members of the research team, and by others. The Payback Framework developed by HERG consists of a multi-dimensional categorisation of benefits from health research and a model of how best to assess them. The multi-dimensional categorisation of benefits includes not only traditional outputs from research (such as the knowledge contained in papers and the building of research capacity) but also additional benefits such as improved health policies, better health care and economic benefits. The model incorporates the thinking of Kogan and Henkel described above and highlights the importance of permeability at the interfaces where researchers and potential users of the research might interact, as set out in the collaborative approach. A number of studies conducted by HERG have demonstrated the wide range of benefits that have accrued, and in some cases continue to accrue, from several NHS R&D programmes: the North Thames NHS Region R&D Programme (Buxton et al., 2000; Hanney et al., 1999, 2000); the NHS Implementation Methods Programme (Soper and Hanney, 2007); the NHS Health Technology Assessment Programme (Hanney et al., 2007). Other studies have shown the impacts from some other NHS programmes of research, including other regional programmes (Ferguson et al., 2000; Elliott and Popay, 2000), other NHS central R&D programmes (Wisely, 2001a and 2001b) and the SDO Programme (Peckham et al., 2008). In addition, HERG led an evaluation for the MRC/Wellcome Trust/Academy of Medical Sciences of the economic value of all UK medical research, including that funded by the NHS, in the fields of cardiovascular disease and mental health (Buxton et al., 2008).
However, and as noted in the brief overview of the literature above, there is comparatively little evidence about the benefits from research engagement per se and almost no attention has been paid to assessing this activity through the performance measurement systems currently used in the NHS. This is a large gap. The improvements in healthcare performance that could be linked to research engagement are not alternatives to the paybacks already identified as coming from research; they are both overlapping and additional ways in which impacts might be enhanced. We believe, therefore, that there could be considerable benefits from a proper analysis of whether research engagement per se leads to improvements in healthcare performance.
The second main contribution that will come from this project is that it will analyse how research engagement might improve performance, through what mechanisms, and in what contexts. In the current climate of deficit reduction in the public sector it is important that the health research system not only seeks to identify and demonstrate the benefits that accrue from the resources spent on health research, but is also able to illustrate how it is attempting to identify more effective mechanisms for achieving and enhancing those benefits. Exploring the nature of research engagement and its impact on performance across the service is one way of doing this. Such an analysis should make an important contribution to current efforts to organise the interface between the health research system and the healthcare system so as to maximise the benefits that might emerge from engagement in research (DH, 2006; NIHR, 2009).
This work should, therefore, complement, and add value to, a considerable range of initiatives that are currently underway in the English health R&D system, such as the development of the CLAHRCs (NIHR, 2009). As explored in detail by Steve Hanney and Bryony Soper, and colleagues, recent reforms in the health R&D system are a further major advance in a series of attempts to ensure the R&D system more closely meets the needs of a range of stakeholders, including practitioners, managers, and policymakers within healthcare organisations, and patients and the public (Hanney et al., 2010). As far back as the House of Lords report of 1988, and indeed earlier, there have been attempts to ensure that health research is embedded in the NHS. The Best Research for Best Health reforms took this process further (DH, 2006) and the Cooksey Report (Cooksey, 2006) ensured that the whole publicly funded R&D system in the UK was considered at a coherent system-wide level.
The analysis of all these steps concluded that major advances are being made because of the system-wide approach being adopted (Hanney et al., 2010). In this review of research engagement we will take a broad perspective. But it was also shown that progress was being made in the R&D system because attention was being given to developing mechanisms at various interfaces between the health research system and various stakeholders. Even here, however, several areas were identified where the mechanisms had not yet been fully developed, and it was therefore unlikely that the system was achieving its full potential. In the case of the benefits that might come from research engagement the issues have so far been so under-explored in any systematic way that it is very unlikely that the most effective set of mechanisms (that might work in specific contexts) has been identified. This project will aim to make progress in this area.
4. Methods
Background
This evidence synthesis draws on recent and on-going methodological developments in the field of conducting systematic reviews. Whilst our task is somewhat different from that of Greenhalgh and colleagues (2005), we propose to draw, in particular, on the approach developed by that team as it provides a rigorous and comprehensive approach to reviewing literature. This approach has six distinct phases: the planning phase, the search phase, the mapping phase, the appraisal phase, the synthesis phase and the recommendations phase. Each of these will be dealt with in turn in this section. We have also accepted an invitation to participate in the network created by the SDO-funded RAMESES project, set up to further methodological development in the field of realist reviews and meta-narrative synthesis.
1. Planning the review
The team has been drawn together to reflect some of the key disciplinary fields relevant to the review process: the social science literature on improving the uptake of research (SH, AB), translational research (AB), organizational change (BS), health services research (SH, AB and BS), and science and policy (CD). Further relevant expertise will be accessed via the advisory group. This phase will start before the formal first month of the project (see timetable below) and could involve review planning, discussions with advisory group members including the two public and patient representatives, and team ‘brainstorming meetings’ to begin to scope the review. At this stage, the review does not have a single, focused question, but, rather, a menu of questions reflecting the tender specification and the issues elaborated on earlier. These questions include:
- What definitions or aspects of engagement in research might be relevant for addressing the overall question?
- What dimensions of performance in healthcare organisations will be relevant for addressing the overall question, and how is improvement best measured?
- Does research engagement increase the ability and/or willingness of healthcare staff as individuals to absorb and use relevant research findings from wherever they originate?
- Does research engagement increase the ability and/or willingness of healthcare teams to absorb and use relevant research findings from wherever they originate?
- Does research engagement increase the ability and/or willingness of healthcare services and organisations to absorb and use relevant research findings from wherever they originate?
- Does research engagement by a range of stakeholders in healthcare organisations lead to the production of knowledge that will be more likely to be used in that setting, and possibly other similar ones?
- Does the engagement of patients in research agenda setting and advisory groups, etc. lead to the production of knowledge that will be more likely to be utilised, including by patients themselves drawing it to the attention of healthcare staff and organisations?
- In what ways does the adoption/utilisation of research improve performance (however measured) at clinician, team, service, and organisational level?
- Does research engagement encourage a more questioning and open culture that facilitates performance of healthcare staff and/or systems?
- Are some types of research engagement, and types of research, more likely to produce improved performance than others?
- Can we identify the levels of healthcare organisations that are most likely to improve performance as a result of the various types of research engagement?
2. Search strategy
Greenhalgh et al. (2005) recommend an initial search based on intuition, informal networking and browsing to begin to develop an understanding of the field under review. This approach will be used (in addition to collaboration with information scientists) to identify key words and confirm the likely range of databases and the years to be searched.
The more focused, systematic search will include three elements: database searches, web searches and expert contacts. The bibliographic searches are likely to be conducted in the following databases (ASSIA, British Nursing Index, CINAHL, EMBASE, HMIC, IBSS, Medline, PsychInfo and Sociological Abstracts) and are likely to include, as a minimum, papers from 1980–2010. In addition an internet search will be conducted. Key internet sources would include the Department of Health website, the NIHR website, the World Health Organization and the websites of a number of Canadian health research organizations that have conducted research in this area. References from article reference lists and recommendations will also be used to identify papers. We will also communicate directly with authors to follow up on references to on-going research. We will consult with the expert advisory group in order to access any unpublished work and any new or ongoing research. This process has been found to be particularly useful in previous reviews (Boaz et al., 2009).
We believe the grey literature may be particularly relevant as part of the search and data collection in this review, and therefore set out here in more detail how we shall conduct it, although some of this overlaps with the general description above. The search will include: a search of SIGLE (the grey literature database); web searches covering key organizations in the field such as NICE, the Department of Health, NIHR and the Canadian Health Services Research Foundation; a search of relevant library catalogues, including the University of Birmingham Centre for Health Services Management library; and an e-mail consultation with key experts in the field (to access new and unpublished research). We will also consult with the research advisory group. Initial sifting will be based on titles and abstracts, with full papers obtained where there is any uncertainty regarding inclusion. The initial inclusion criterion will be relevance to the review questions. A data sheet will be completed for all papers included in the review, reporting the bibliographic details, the research question, methods, theoretical approach, findings, etc.
3. Mapping
It is often considered to be good practice to include a mapping or scoping stage in the review process, or indeed to consider scoping exercises as rigorous outputs in their own right (Arksey et al., 2005). From our previous experience of conducting reviews in the field of assessing the impact of research (Boaz et al., 2009; Hanney et al., 2007), and our initial analysis of the literature related to research engagement, we are aware that the literature we are seeking is diverse in terms of methodological approaches and types of output (for example, we will need to include reports as well as academic papers). It is located in many different fields and will involve database searches in various fields including health and the social sciences. These fields are described in the next section on the contribution of existing research, but will include medical education, organizational change, evidence-based medicine and social studies of science. The mapping stage will seek to capture this wider literature on, for example, types of involvement, levels of engagement, types of people, teams and organizations, types of research undertaken, and when and how professionals are involved.
This map will be used to stimulate discussion within the team and with the advisory group and funders in order to develop a focused review question for a more rigorous and systematic review. Additional searches will be carried out to address any gaps identified. As articulated, the research question currently contains a number of unknowns which the mapping phase should help to clarify.
4. Appraisal
We will use the more generic appraisal questions developed by Dixon-Woods et al. (2004) when sifting for relevance, using methodologically specific checklists to check for validity. It is likely that this review will draw together a wide range of types of research, which will involve using a wide range of quality appraisal tools. These have recently been drawn together in a number of texts (for example, Petticrew and Roberts, 2007). We will therefore use appropriate appraisal techniques to evaluate the studies identified. Quality appraisal tools are considered to be useful in making quality judgements explicit (Dixon-Woods et al., 2007). AB has considerable experience of both teaching critical appraisal and contributing to methodological development in the field. For example, she developed a generic critical appraisal tool (Long et al., 2004) which can be used where no quality criteria exist (for example, policy documents and service user evidence). Quality scores will be reported in the data tables and on the data extraction forms. Drawing on the work of Dixon-Woods and colleagues (2007), we will exclude studies that are considered to be fatally flawed following quality appraisal.
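As a purely illustrative aside (not part of the protocol), the sketch below shows one way the recorded quality judgements could be used to screen out fatally flawed studies while retaining the scores for the data tables; the study identifiers, checklist names and scoring fields are invented for the example.

from typing import NamedTuple, List

class Appraisal(NamedTuple):
    study_id: str
    checklist: str        # methodology-specific checklist applied
    quality_score: int    # score to be reported in the data tables
    fatally_flawed: bool  # judgement reached after quality appraisal

def retain_for_synthesis(appraisals: List[Appraisal]) -> List[Appraisal]:
    # Exclude studies judged fatally flawed; keep everything else, scores intact
    return [a for a in appraisals if not a.fatally_flawed]

appraised = [
    Appraisal("S01", "qualitative checklist", 8, False),
    Appraisal("S02", "generic tool", 2, True),
]
print([a.study_id for a in retain_for_synthesis(appraised)])  # ['S01']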
5. Synthesis
A variety of approaches to synthesis have emerged in the past ten years (Boaz et al., 2006), although many of them are still in the process of being tested and developed. Methodological developments in research synthesis have generated a wide variety of approaches to analysis, ranging from conventional tabulation and thematic analysis to ideas webbing, concept mapping and theory building. Our approach is informed by the work of Popay et al. (2006), Greenhalgh and colleagues (2005) and Pawson (2006). The approaches of Greenhalgh et al. (2005) and Pawson (2006) are particularly relevant to complex review questions and diverse literatures (in terms of both types of research and research drawn from different fields). In their review of the diffusion of innovation literature, Greenhalgh et al. (2005) developed an approach to synthesis that can bring together diverse bodies of literature as meta-narratives. Realist Synthesis (Pawson, 2006) can also be used with very diverse literatures (for example, a review of mentoring studies included qualitative research, surveys, evaluations, a path analysis, a theoretical piece and a meta-analysis). Rather than seeking to synthesise all the research identified, it uses a purposive sample of studies to revise an existing explanatory model (derived from an earlier mapping stage) exploring how a mechanism might work, for whom and in what circumstances.
A first stage of the synthesis process will involve extraction and collation of key information. Popay et al. (2006) describe this essential stage of description, tabulation, grouping and clustering as the preliminary synthesis. All papers will be read by at least two members of the team. Descriptive information about the papers (country of origin, year of publication, disciplinary background etc.) will be recorded and reported in tabular form.
The second stage of the synthesis process seeks to move beyond simple descriptions. The methods will be driven by both the question we seek to address and the type of research data identified. For example, our understanding from previous research in the field is that the literature on the impact of research engagement on performance is not as established or formalised as the diffusion of innovation literature, and hence establishing meta-narratives as a product of the synthesis may not be possible. If key theories (or competing theories) emerge from the early stages of the review it might be possible to use a Realist approach. Using key themes identified at earlier stages of the process of synthesis (see stages 2–3), Realist Synthesis begins by identifying key theories underpinning an intervention or practice. The process of synthesis gradually refines these theories using the research data. Where there are competing theories, the research data are used to settle disputes about which will best support policy change in which circumstances. For example, for this review, the theory of absorptive capacity or learning organizations might be interrogated. If this is not possible, we will use a more conventional thematic analysis, coding themes that arise from the studies that are relevant to the review questions. We would be happy to discuss with the SDO the question of which approach would be best to adopt. The analysis will also explore conflicting findings and findings that do not fit within the emerging analysis. We would also use concept mapping to represent visually the links between individual studies, using diagrams and flow charts to construct a model of key themes relevant to the review question. Popay et al. (2006) suggest a final phase to the synthesis process which involves an assessment of the robustness of the synthesis. This opportunity for critical reflection will be particularly valuable given the likely complexity of the synthesis involved in the proposed review and the continuing debate within the RAMESES project and elsewhere about approaches to research synthesis.
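If the more conventional thematic analysis route is taken, the coding might in practice amount to recording which themes arise in which studies and then inverting that record to see which studies support each theme, as the hypothetical sketch below illustrates (the theme labels and study identifiers are invented purely for the example and do not come from the protocol).

from collections import defaultdict

# Hypothetical coding of themes against studies
study_themes = {
    "S01": {"absorptive capacity", "research culture"},
    "S02": {"learning organisation", "research culture"},
    "S03": {"absorptive capacity"},
}

def theme_index(coding):
    # Invert the coding: for each theme, list the studies that support it
    index = defaultdict(set)
    for study, themes in coding.items():
        for theme in themes:
            index[theme].add(study)
    return {theme: sorted(studies) for theme, studies in index.items()}

print(theme_index(study_themes))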
6. Producing recommendations
Although we will encourage ongoing involvement of stakeholders in the review, the final phase of the review process is dedicated to reflection and consultation with the intended users of the review and the advisory group in order to develop meaningful recommendations for policy and practice. This stage will involve a distillation of key messages from the research, a consideration of how these relate to other relevant information, and the production of recommendations for policy, practice and research. We will also seek out opportunities to discuss these recommendations with potential users and the advisory group, via a meeting/conference call to discuss and validate the emerging recommendations. The liaison with potential users of the review could continue after the project has been formally completed.
5. Contribution of existing research
Given that this study is a synthesis of the evidence, it will focus to a considerable extent on analysing and synthesising the contribution of existing research. In this section, therefore, we provide an outline of some of the bodies of knowledge to be reviewed that are additional to those on research engagement and improved performance that have already been described in the Background section. This additional literature covers a range of potentially relevant and sometimes overlapping topics, some of which are likely to be more relevant at one level of performance in healthcare organisations than others. Much of this comes from the health field and includes studies on: organisational process, learning organisations, the role of medical academics, medical education and continuing professional development, critical appraisal, quality improvement (QI), evidence-based medicine (EBM), medical engagement, and the diffusion of innovations. Furthermore, as with the notion of ‘absorptive capacity’, some of the concepts and theories used in this diverse body of work originated elsewhere, often from studies of the role that research engagement plays in industry and in commercial firms. However, in terms of addressing the questions set out above, this literature currently resembles an unsolved jigsaw puzzle, and although some of it does try to indicate how links can be made between research engagement and performance, in other bodies of literature the links are more tangential. We will need to interrogate and arrange the various bodies of knowledge carefully before a clear picture, or evidence synthesis, can emerge. We start with a very brief account of some of the literature from studies about research in industry.
Learning from research on industry
The specific term ‘absorptive capacity’ was referred to above because it was directly incorporated into assessments of health research such as the payback framework. In this sub-section we consider other literature that sometimes covers similar themes but relates to broader movements. In 2004 the UK Government published the ten-year Science and Innovation Investment Framework (SIIF), which made the case for a strong public science base to support improvements in welfare and saw the outputs from the science base as new knowledge, skilled people, new methodologies and new networks. This Framework provided part of the context for the 2006 DH R&D strategy, Best Research for Best Health, mentioned above. The SIIF was based on a wide-ranging review of the available evidence, which included substantial evidence from industry. One of the key lessons for this review from that body of work is the importance to firms of effective in-house R&D, and here the evidence includes some historical analysis with the claim, for example, that “German industry in the 1870s had already established the new pattern of in-house R&D leading to the introduction of new products and new processes” (Freeman and Soete, 2000).
Organisational process
Organisational process has been described as the missing link between research and practice. For example, Rosenheck builds on a 1999 report by the US National Institute of Mental Health entitled ‘Bridging Science and Services’ to describe how the operational characteristics of large, complex organizations and strategies have been used to facilitate implementation of innovative programmes (Rosenheck, 2001). His observations were largely descriptive and had not been subject to systematic empirical research, but this gap is now being addressed. In one paper Belkhodja and colleagues report on the organisational determinants of research utilisation in Canadian health services organisations and confirm the importance of linkage mechanisms, research experience, unit size and research relevance for the users (Belkhodja, 2007). Part of the more general literature on innovation (within firms, nationally and internationally) is also relevant to this discussion. For example, a recent paper links self-directed learning, organisational learning and knowledge management capabilities to organisational performance in 21 technology companies (Ho, 2008).
Learning organisations
The literature on learning organisations again overlaps somewhat with the concerns above, and ranges much more widely than healthcare alone. But it is seen as an important concept within attempts to analyse and encourage the use of knowledge advances within healthcare organisations (Davies et al., 2000), and the concept can be specifically linked to the use of research (Nutley et al., 2007).
Knowledge transfer and mobilisation and research utilisation
There are increasing attempts within the healthcare and health research systems to encourage the use of knowledge. This again could be seen as a link between research and improving healthcare. Crilly et al. (2010) review a large body of relevant literature from both the health and management fields.
The role of clinician scientists, or medical academics, in translational research
There are a number of papers focusing on the particular role that physician scientists (or clinician scientists) might play as catalysts for translational research (Archer, 2007; Bevan, 2007; Chung et al., 2007). Liang (2003) describes the potential role as follows: ‘To the clinician scientist studying healthcare it is to translate the promise of contemporary molecular medicine and the social and behavioural sciences and bring them inseparably to the intelligence, sensitivity and morality of the master clinician; and to translate the promise of medicine into actual healthcare for all and to the most vulnerable among us.’ (Liang, 2003: 721). This literature tends to focus on the considerable potential in encouraging physician scientists. Eckert et al. (2006) describe the increasing numbers of physician scientists as a win-win situation, although it has been acknowledged that it will take some time to see the results (Bevan, 2007). The critical factors required to encourage physician scientists include early training opportunities and opportunities for physicians to do PhDs (Bevan, 2007). In contrast, Chung (2007) focuses on the decline in the number of physician scientists and cites their long training, lack of mentors, income issues and other responsibilities as barriers. Reports on such issues in the UK from the Academy of Medical Sciences (2003) were a major factor behind the new R&D strategy developed by the DH (DH, 2006).
This stream of thinking has some parallels with analyses in the field of education of the benefits of partnerships and interactions between researchers and research users (Huberman, 1993; Cousins and Simon, 1996), and of the benefits of practitioners as researchers (Fox, 2003; Stenhouse, 1979). A House of Lords report into allergy services in the UK highlighted the role of the MRC-Asthma UK Centre in Allergic Mechanisms of Asthma in fostering translational research (House of Lords, 2007). In the healthcare field there are some particularly striking examples of care provided by clinicians who implement their own cutting-edge research that is of high quality and improves the care provided. In some instances this can be in fields where there are a small number of cases of the particular illness, and the care provided by the leading medical academic is at the national centre for the disease, or at one of a small number of specialist centres. In these cases quite a high proportion of the patients with that specific illness may end up receiving this high-quality treatment.
Medical education, continuing professional development and critical appraisal
The medical education literature includes various analyses of the types of engagement in research that are possible for clinicians, and also sets out why it is seen as beneficial for clinicians. In some of the literature the emphasis is on doing research, as in a recent review of programmes to engage medical students in research in two US universities (Laskowitz et al., 2010). Other literature, including regular articles in the BMJ, focuses on medical education (as in various assessments of EBM and critical appraisal courses). There have been some attempts in the HERG payback stream of work (Buxton et al., 2000) and elsewhere to see how far critical appraisal courses contribute to improved healthcare. As part of making this review as relevant as possible to the needs of the NHS we shall consider reports on issues relevant for different groups of healthcare professionals who are being encouraged, for example, to undertake various types of higher degrees. There are also overlaps between consideration of medical education and the literature on the role of medical academics, highlighted in various reports in the 2000s (Academy of Medical Sciences, 2003).
Evidence-based medicine
The literature on EBM is extensive and it overlaps with some of the work on assessing the impacts of health research as it represents a very explicit attempt to get research evidence used to improve healthcare performance. There was a considerable growth of interest in this approach in the 1990s following publication of a paper by the Evidence-Based Medicine Working Group in 1992, but later there were debates about how far the initial approaches advocated for getting evidence into practice were making an important impact (Coomarasamy and Khan, 2004; Gabbay and le May, 2004; Sheldon et al., 2004; McCulloch, 2004).
Quality improvement
There is a considerable literature on how best to improve quality in healthcare organisations, and SH and BS also have recent practical experience as part of a team evaluating two large Health Foundation schemes involving seventeen projects in total (Soper et al., 2008; Ling et al., 2010). Key recommendations from this work highlight the importance of recognising the various levels and issues that can be involved in improved performance within health care organisations: ‘QI should be part of the education, training and appraisal of health professionals. This not only concerns ‘heroic’ leadership but also dispersed leadership and the ability to maintain effective dialogue with managers, service users and other clinicians.’ (Ling et al., 2010). We will interrogate this literature to see what has been said here about engagement in research as a contribution to improving quality. An initial scan of the literature suggests that much of it might not relate to research engagement at all, but that there will be some studies that have relevant points to make, such as the evaluation of the US Quality Enhancement Research Initiative (QUERI) programme (Graham et al., 2009).
Medical engagement
A correlation between medical engagement and service improvement has been demonstrated in two recent studies cited in the ITT (NHS Institute for Innovation and Improvement and Academy of Medical Royal Colleges, 2008; Mannion et al., 2010). While the impact of research engagement on service improvement was not specifically explored in either study, the report by Mannion et al. does note the motivational force of maintaining or developing a reputation as a high status organisation, describing it as a key influence on the formation of a highly developed performance management culture within an organisation. And, importantly, reputation here is interpreted not just as service excellence but as national and international academic and research excellence (Mannion et al., 2010, p. 206). A similar picture emerged from the review of the impacts of the NIHR-funded Biomedical Research Units undertaken by BS with RAND Europe (Marjanovic et al., 2009). We will seek to build on and further explore these insights.
The diffusion of innovations
The many bodies of research that contribute to explaining the diffusion of innovations have been reviewed for the SDO by Trisha Greenhalgh and her colleagues (Greenhalgh et al., 2004, 2005). For the current synthesis it will be highly relevant to explore how far notions of research engagement were seen as important in the literature reviewed by Greenhalgh et al. An initial review might suggest that research engagement plays a limited direct role, but some of the research traditions included in Greenhalgh et al.'s review form part of the literature described above. Some articles on the spread of innovations, or technologies, in healthcare that were included in that review, and later ones, will be examined. For example, Ferlie et al. (2005) conducted two qualitative studies in the UK health care sector to trace eight purposefully selected innovations. Complex, contested and nonlinear innovation careers emerged. Developing the nonlinear perspective on innovation spread further, they theorised that multi-professionalisation shapes “nonspread”, i.e. social and cognitive boundaries between different professions retard spread as individual professionals operate within uni-disciplinary communities of practice.
It is clear from this brief initial review that a large and highly diverse literature can be identified that might have some relevance for the question of whether engagement in research improves performance of and within healthcare organisations. But to be of value to the NHS it will be necessary to add to the body of knowledge by identifying and synthesising those parts of it that can help address the question. The ITT notes that the evidence synthesis should map and explore the likely or plausible mechanisms through which research engagement might bear on wider performance at each level. Hanney et al. (2010) identified a series of mechanisms that help the English DH's research system meet the needs of the various stakeholders. Such a list provides a starting point for identifying the mechanisms through which research engagement might improve the performance of healthcare organisations. In line with the realist approach to synthesis that will inform parts of our study, we believe that it is also vital to consider the various contexts in which these mechanisms are used.
6. Plan of Investigation
Pre-project preparatory work
Initially in this phase we will make the necessary ethics arrangements, recruit the research assistant and finalise the membership of the advisory group. This phase will also include an intense process of scanning the very wide field to consolidate our initial understanding of the issues, whose conceptual difficulty is compounded by the wide scope. We shall also begin the process of consultation.
Month 1: Planning phase
Starting with the first formal team meeting we will undertake a ‘conceptual brainstorming exercise’ to attempt to identify the pieces, or at least the edges, of the jigsaw puzzle of literature described above as being potentially relevant to addressing the series of questions about whether research engagement (of various types) improves performance of healthcare organisations (at various levels). The immediate purpose of this will be to develop a search strategy for the review. We will also hold discussions with selected experts, including the service user members of the advisory group and others more widely. At the end of month 1 we will have developed a draft search strategy to be shared with the advisory group.
Milestone: draft search strategy.
Month 2: Searching phase (initial search)
We will collate comments from the advisory group members. We will divide the research areas to be searched among the team, and will conduct the initial, more informal searches and informal networking. Once responses have been collated and analysed, the team will meet to make key decisions about the search strategy. In collaboration with the information scientists we will have refined the search strategy by the end of month 2. This will inevitably be challenging because we shall want to include as much of the literature as is potentially relevant, despite the danger that the search terms identified from such a diverse body of knowledge will be too numerous and broad.
Milestone: Refined search strategy.
Month 3: Data collection
In month 3 we will begin the data collection. The search strategy for the more systematic review will include three elements: database searches, web searches and expert contacts.
-
Databases, key words, inclusion and exclusion criteria will be established during the mapping stage of the review. However, the bibliographic searches are likely to be conducted in the following databases (ASSIA, British Nursing Index, CINAHL, EMBASE, HMIC, IBSS, MEDLINE, PsycINFO, Sociological Abstracts and SIGLE, the grey literature database), covering 1980–2010.
-
In addition, an internet search will be conducted. Key internet sources would include the Department of Health website, the NIHR website, the World Health Organization and the websites of a number of Canadian health research organizations that have conducted research in this area. We will also search relevant library catalogues, including the University of Birmingham Centre for Health Services Management library.
-
References from article reference lists and recommendations will also be used to identify papers. In particular, we will consult with the expert advisory group in order to access any unpublished work and any new or ongoing research. Initial sifting will be based on titles and abstracts, with full papers and publications obtained where there is any uncertainty regarding inclusion.
A data sheet will be completed for all papers and publications included in the review. The papers will be coded using appropriate software (e.g. NVivo). All papers will be read by at least two members of the team. Descriptive information about the papers (country of origin, year of publication, disciplinary background etc.) will be recorded and reported.
Milestone: Data searches.
Month 4: Continue data collection and plan mapping phase
The data collection will continue, but in month 4 the team will also meet to plan the mapping exercise. The mapping stage will seek to capture this wider literature on, for example, types of involvement, levels of engagement, types of people, teams and organizations, types of research undertaken, and when and how professionals are involved.
Month 5: Conduct mapping
Month 5 will be the main phase of the mapping exercise. Further meetings might be held with relevant members of the steering group. Some specific consultancy work might be commissioned at this stage from some advisory group members.
Month 6: Continue mapping and plan more focused review
The mapping exercise will continue, but towards the beginning of month 6 the team will meet to discuss the shape of the map that is emerging and how it should be drafted. This map will be used initially to stimulate discussion within the team and with the advisory group and funders in order to develop a focused review question for a more rigorous and systematic review. Following the brief, a question of interest would address whether different types of research engagement are effective in influencing practice. However, this topic currently contains a number of unknowns. The mapping stage will clarify the range of ways in which healthcare organisations can be engaged in research and the types of effect these might have on practice. As part of this we shall also need to consider the mechanisms through which improvements might be made. We shall also need to consider how the various types of research engagement could potentially lead to improved performance at various levels of the healthcare system. Following identification of the question, we will run further database searches.
Milestone: Draft map sent for comment. Further database searches.
Month 7: Appraisal stage and synthesis planning
All papers for inclusion in the final review will be critically appraised. Team members will hold an initial meeting to review comments on the map and plan the synthesis of the evidence gathered. The data analysis will continue throughout month 7. A further meeting will be held to review progress on the analysis and plan the synthesis (including selecting an approach to fit both the research question and emerging data) and how it will be reported.
Milestone: completed critical appraisals.
Month 8: Synthesis
The first stage of the synthesis will involve tabulation and description of the included studies. The second stage will be conducted using either meta-narratives, Realist Synthesis or a thematic approach combined with conceptual maps. Towards the end of the month, the synthesis report will be drafted and sent to the advisory group and the funder for review.
Milestone: Draft evidence synthesis sent for comment.
Month 9: Review, preparing final report and dissemination
The comments on the draft report will be collated. We will hold a conference call/meeting (for those members of the advisory group able to attend) to discuss the draft synthesis. If time and resources allow, we would hope to involve users in this and make it an opportunity to discuss and validate the emerging recommendations. The final drafting of the map will be held back until this stage so that insights from the synthesis can be fed back into it. The synthesis report will be revised in the light of the comments received and will be disseminated as described below.
Milestone: Completed evidence synthesis report and map.
Post project dissemination work
The team will be available to discuss the recommendations emerging from the review with potential users, but we shall also develop a proactive dissemination plan depending on the precise nature of the findings. As appropriate, the team will seek opportunities to present findings to policy and practice communities, both in general and in a more targeted way. In order to share the findings widely, including with the research community, these outputs will be: written up as one or more academic journal articles; presented at conferences where opportunities arise; and made available through the HERG, KCL and SDO websites. A further possibility, which would very much depend on what emerges from the review, is that if especially interesting studies are identified from the international literature it might be worth exploring whether their authors could be invited to contribute to an event such as an SDO conference.
The team have existing networks which are ideally suited to mobilizing interest in the findings, and potentially uptake of any recommendations. These include links with specific organisations and individuals. In the last three years a number of new schemes have been established by NIHR which aim to increase collaboration between healthcare researchers and NHS staff and users, and between universities and NHS organisations, and to develop research capacities. These include the Biomedical Research Centres and Units and the CLAHRCs. These new entities present real opportunities to use the outputs of this review in a timely fashion. For example, the establishment of the CLAHRCs is a pioneering attempt to develop mechanisms to achieve improvements in healthcare performance through wider engagement of healthcare organisations in research. SH and BS are members of one of the teams that successfully applied to the SDO to conduct the formative evaluation of the whole CLAHRCs initiative. The director of one of the CLAHRCs is a member of the proposed advisory group for this current study. Therefore, the applicants on this proposal are likely to have various opportunities to discuss the findings of this evidence synthesis with leading members of the CLAHRCs initiative. AB works within two of the Biomedical Research Centres based at King's College London and is ideally placed to feed the findings into the wider network of BRCs and BRUs. Additionally, other members of the advisory group will be well placed to assist with dissemination of the findings to key target audiences.
Furthermore, from previous discussions and research activities, members of the research team know of various key players within the UK health research system who have expressed an interest in the potential benefits of widespread engagement in research activities by staff of healthcare organisations. We would aim to disseminate the findings to them both formally and informally. Finally, various members of the advisory group would also be well placed to help with dissemination of the findings to international audiences. Through Dr Tikki Pang, such audiences will include WHO. With the planned World Health Report 2012 being on the importance and benefits of health research (Pang and Terry, 2011), WHO is currently taking a particular interest in issues to do with health research systems.
7. Project Management
Steve Hanney and Annette Boaz will share project management activities, and they will liaise regularly. Given the complexity of the literature to be reviewed they both intend to devote 20% FTE time to the project. SH will have overall responsibility for project management and monitoring of progress. He will be particularly responsible for the work being conducted by the members of the team from HERG, including the Research Assistant who will undertake many of the basic tasks of gathering and collating data. AB will be responsible, in particular, for monitoring the progress of the work being conducted at King's, including the input of the information specialists who will run the database search, as described below.
A virtual project advisory group will be established to provide expert input at various points about a range of issues, including the methods being used in the review, the literature being identified, and the findings emerging from the synthesis. Most of the input from the virtual advisory group is expected to be by email, but it is hoped to organise one meeting/conference call towards the end of the project. The membership of the advisory group is listed separately in the Management Plan.
8. Service users/public involvement
We have taken advice from the London Research Design Service (RDS) lead on Patient and Public Involvement on how to engage patients and the public in this synthesis. We have invited two individuals with wide experience of representing the patient/public perspective to join the project's virtual advisory group, and consultation with them will begin in the early stages of the project.
Whilst most of the involvement of members of the advisory group is intended to be via email commentary on progress reports sent round by the project team, we also hope to talk individually to some members of the group to gain their particular perspectives. The two service user advisors will join the first formal project meeting in September to meet the team, learn about the planned research and discuss their contribution to the project.
9. References
- Strengthening Clinical Research. London: Academy of Medical Sciences; 2003.
- Antil T., Desrochers M., Joubert P., Bouchard C. Implementation of an innovative grant programme to build partnerships between researchers, decision-makers and practitioners: the experience of the Quebec Social Research Council. J Health Serv Res Policy 2003;8(Suppl. 2):35-43.
- Archer S. L. The making of a physician-scientist-the process has a pattern: lessons from the lives of Nobel laureates in medicine and physiology. European Heart Journal 2007;28:510-4.
- Arksey H., O’Malley L. Scoping Studies: Towards a Methodological Framework. Int J Social Research Methodology 2005;8:19-32.
- Becker W., Peters J. Technological Opportunities, Absorptive Capacities, and Innovation. Universitaet Augsburg; 2000.
- Belkhodja O., Amara N., Landry R., Ouimet M. The extent and organizational determinants of research utilization in Canadian health services organizations. Science Communication 2007;28:377-41.
- Bevan J. C. Ethics in research. Curr Opin Anaesthesiol 2007;20:130-6.
- Black N. National strategy for research and development: Lessons from England. Annu Rev Public Health 1997;18:485-50.
- Boaz A, Fitzpatrick S, Shaw B. Assessing the impact of research on policy: a literature review. Science and Public Policy 2009;36:255-70.
- Boaz A., Gilbert N. From Postgraduate to Social Scientist: A Guide to Key Skills. London: Sage; 2006.
- Boaz A., Ashby D., Denyer D., Egan M., Harden A., Jones D., et al. A multitude of syntheses: a comparison of five approaches from diverse policy fields. Evidence and Policy 2006;2:479-502.
- Boaz A., Pawson R. The perilous road from evidence to policy: Five journeys compared. Journal of Social Policy 2005;34:175-94.
- Bowen S., Martens P., The Need to Know Team. Demystifying knowledge translation: learning from the community. J Health Serv Res Policy 2005;10:203-11.
- Buxton M., Hanney S. Assessing Payback from Department of Health Research and Development: Preliminary Report. Vol 1: The Main Report. Uxbridge: HERG, Brunel University; 1994.
- Buxton M., Hanney S. How can payback from health services research be assessed?. J Health Serv Res Policy 1996;1:35-43.
- Buxton M., Hanney S. Evaluating the NHS Research and Development Programme: Will the programme give value for money?. Journal of the Royal Society of Medicine 1998;91:2-6.
- Buxton M., Hanney S., Morris, Sundmacher L., Mestre-Ferrandiz J., Garau M., et al. Medical Research – What's it worth? Estimating the economic benefits from medical research in the UK. London: UK Evaluation Forum (Academy of Medical Sciences, MRC, Wellcome Trust); 2008.
- Buxton M., Hanney S., Packwood T., Roberts S., Youll P. Assessing benefits from Department of Health and National Health Service Research and Development. Public Money Manag 2000;20:29-34.
- Callon M. Is science a public good? Fifth Mullins Lecture, Virginia Polytechnic Institute, 23 March 1993. Science, Technology, & Human Values 1994;19:395-424.
- Chung K. C. Revitalizing the training of clinical scientists in surgery. Plastic and Reconstructive Surgery 2007;120:2066-72.
- Cohen W., Levinthal D. Innovation and Learning: the two faces of R&D. Economic J 1989;99:569-96.
- Cooksey D. A Review of UK Health Research Funding. London: HM Treasury, UK Government; 2006.
- Coomarasamy A, Khan K. What is the evidence that postgraduate teaching in evidence based medicine changes anything? A systematic review. BMJ 2004;329:1017-21.
- Cousins J. B., Simon M. The nature and impact of policy-induced partnerships between research and practice communities. Educational Evaluation and Policy Analysis 1996;18:199-218.
- Crilly T., Jashapara A., Ferlie E. Research Utilisation & Knowledge Mobilisation: A Scoping Review of the Literature. Report to NIHR SDO Programme 2010.
- Davies H., Nutley S. Developing learning organisations in the new NHS. BMJ 2000;320:998-1001.
- Denis J., Lomas J. Convergent evolution: the academic and policy roots of collaborative research. J Health Serv Res Policy 2003;8:1-6.
- Research for Health. A Research and Development Strategy for the NHS. London: HMSO; 1991.
- Research for Health. London: Department of Health; 1993.
- Best Research for Best Health: A New National Health Research Strategy – The NHS Contribution to Health Research in England. London: Research and Development Directorate, Department of Health; 2006.
- Dixon-Woods M., Sutton A., Shaw R., Miller T., Smith J., Young B., et al. Appraising qualitative research for inclusion in systematic reviews: a quantitative and qualitative comparison of three methods. J Health Serv Res Policy 2007;12:42-7.
- Dixon-Woods M., Shaw R., Agarwal S., Smith J. The problem of appraising qualitative research. Quality and Safety in Health Care 2004;13:223-5.
- Eccles M. P., Armstrong D., Baker R., Cleary K., Davies H., Davies S., et al. An implementation research agenda. Implementation Science 2009;4.
- Elliott H., Popay J. How are policy makers using evidence? Models of research utilisation and local NHS policy making. Journal of Epidemiology and Community Health 2000;54:461-8.
- Elliott R., Hanney S., Buxton M., Henkel M., et al. Assessing Payback from Department of Health Research and Development: Preliminary report. Vol. 2. HERG Research Report No. 19. Uxbridge: Health Economics Research Group, Brunel University; 1994.
- Evidence-Based Medicine Working Group . Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA 1992;268:2420-5.
- Ferguson B., Kelly P., Georgiou A., Barnes G., Sutherland B., Woodbridge B. Assessing payback from NHS reactive research programmes. J Manag Med 2000;14:25-36.
- Ferlie E., Fitzgerald L., Wood M., Hawkins C. The (Non) Spread of Innovations: The Mediating Role of Professionals. Academy of Management Journal 2005;48:117-34.
- Fox N. J. Practice-based evidence: Towards collaborative and transgressive research. Sociology-the Journal of the British Sociological Association 2003;37:81-102.
- Frank C., Nason E. Health research: measuring the social, health and economic benefits. Canadian Medical Association Journal 2009;180:528-34.
- Freeman C., Soete L. The Economics of Industrial Innovation. Boston: MIT Press; 2000.
- Gabbay J, Le May A. Evidence based guidelines or collectively constructed “mindlines?” Ethnographic study of knowledge management in primary care. BMJ 2004;329:1013-7.
- Gibbons M., Limoges C., Nowotny H., Schwartzman S., Scott P., Trow M. The New Production of Knowledge. London: Sage; 1994.
- Goddard M., Davies H., Dawson D., Mannion R., McInnes F. Clinical performance measurement: part 1–getting the best out of it. Journal of the Royal Society of Medicine 2002;95:508-10.
- Goddard M., Mannion R. The role of horizontal and vertical approaches to performance measurement and improvement in the UK public sector. Public Performance & Management Review 2004;28:75-9.
- Graham I. D., Tetroe J. Learning from the U.S. Department of Veterans Affairs Quality Enhancement Research Initiative: QUERI Series. Implementation Science 2009;4.
- Greenhalgh T., Robert G., Macfarlane F., Bate P., Kyriakidou O. Diffusion of Innovations in service organisations: systematic review and recommendations. Milbank Q 2004;82:581-629.
- Greenhalgh T., Robert G., Macfarlane F., Bate P., Kyriakidou O., Peacock R. Storylines of research in diffusion of innovation: a meta-narrative approach to systematic review. Social Science & Medicine 2005;61:417-30.
- Ham C., Dickinson H. Academy of Medical Royal Colleges, NHS Institute, University of Birmingham; 2008.
- Hanney S., Buxton M., Green C., Coulson D., Raftery J. An assessment of the impact of the NHS Health Technology Assessment Programme. Health Technol Assess 2007;11.
- Hanney S., Kuruvilla S., Soper B., Mays N. Who needs what from a national health research system: lessons from reforms to the English Department of Health's R&D system. Health Research Policy and Systems 2010;8.
- Hanney S., Mugford M., Grant J., Buxton M. Assessing the benefits of health research: lessons from research into the use of antenatal corticosteroids for the prevention of neonatal respiratory distress syndrome. Social Science & Medicine 2005;60:937-47.
- Hanney S., Packwood T., Buxton M. Evaluating the benefits from health R&D centres: a categorisation, a model and examples of application. Evaluation: the International Journal of Theory, Research and Practice 2000;6:137-60.
- Hanney S. R., Home P. D., Frame I., Grant J., Green P., Buxton M. J. Identifying the impact of diabetes research. Diabetic Medicine 2006;23:176-84.
- Hanney S, Grant J, Wooding S, Buxton M. Proposed methods for reviewing the outcomes of research: the impact of funding by the UK's ‘Arthritis Research Campaign’. Health Research Policy and Systems 2004.
- Harrison A, New B. Public Interest, Private Decisions: Health-related Research in the UK. London: King's Fund; 2002.
- Ho L.-A. What affects organisational performance? The linking of learning and knowledge management. Industrial Management & Data Systems 2008;108:1234-54.
- House of Lords Select Committee on Science and Technology . HL Paper 54-1 1988.
- Kalra L, Evans A, Perez I, Knapp M, Donaldson N, Swift C. Alternative strategies for stroke care: a prospective randomised controlled trial. Lancet 2000;356:894-9.
- Knudsen H. K., Roman P. M. Modeling the use of innovations in private treatment organizations: The role of absorptive capacity. J Subst Abuse Treat 2004;26:353-61.
- Kogan M., Henkel M. Government and Research. London: Heinemann; 1983.
- Kogan M., Henkel M., Hanney S. Government and Research: Thirty Years of Evolution. Dordrecht: Springer; 2006.
- Laskowitz D. Engaging students in dedicated research and scholarship during medical school: the long-term experiences at Duke and Stanford. Acad Med 2010;85:419-28.
- Liang M. H. Translational research: getting the word and the meaning right. Arthritis & Rheumatism 2003;49:720-1.
- Liggins G. C., Howie R. N. A controlled trial of antepartum glucocorticoid treatment for prevention of the respiratory distress syndrome in premature infants. Pediatrics 1972;50:515-25.
- Ling T., Soper B., Buxton M., Hanney S., Oortwijn W., Scoggins A., et al. An Evaluation of The Health Foundation's Engaging with Quality Initiative. London: The Health Foundation; 2010.
- Lomas J. Finding audiences, changing beliefs: the structure of research use in Canadian health policy. J Health Polit Policy Law 1990;15:525-42.
- Lomas J. Using ‘Linkage and Exchange’ to move research into policy at a Canadian foundation. Health Aff 2000;19:236-40.
- Long A., Grayson L., Boaz A. Assessing the quality of knowledge in social care: exploring the potential of a set of generic standards. British Journal of Social Work 2006;26:207-26.
- McCulloch P. Half full or half empty VATS?. BMJ 2004;329.
- Majumdar SR, Roe MT, Peterson ED, Chen AY, Gibler WB, Armstrong PW. Better outcomes for patients treated at hospitals that participate in clinical trials. Arch Intern Med 2008;168:657-62.
- Mannion R., Davies H., Harrison S. Changing Management Cultures and Organisational Performance in the NHS (OC2) 2010. URL: www.sdo.nihr.ac.uk/files/project/94-final-report.pdf.
- Mannion R., Davies H., Marshall M. Impact of star performance ratings in English acute hospital trusts. J Health Serv Res Policy 2005;10:18-24.
- Marjanovic S, Soper B, Shehabi A, Celia C, Reding A, Ling T. Changing the Translational Research Landscape: A Review of the Impacts of Biomedical Research Centres in England. Cambridge: RAND Europe; 2009.
- Moscucci O., Herring R., Berridge V. Networking health research in Britain: the post-war childhood leukaemia trials. Twentieth Century British History 2009;20:23-52.
- Delivering Health Research. London: DoH; 2009.
- Nowotny H., Scott P., Gibbons M. Re-thinking Science: Knowledge and the Public in an Age of Uncertainty. Oxford: Policy Press; 2001.
- Nutley S., Walter I., Davies H. Using Evidence: How research can inform public service. Bristol, UK: The Policy Press, University of Bristol; 2007.
- Chairman's First Progress Report. The Stationery Office Norwich; 2008.
- Pang T., Sadana R., Hanney S., Bhutta Z., Hyder A., Simon J. Knowledge for better health – a conceptual framework and foundation for health research systems. Bull World Health Organ 2003;81:815-20.
- Pang T, Terry RF, the PLoS Medicine Editors. WHO/PLoS Collection “No Health Without Research”: A Call for Papers. PLoS Med 2011;8.
- Peckham M. Research and development for the National Health Service. Lancet 1991;338:367-71.
- Peckham S., Willmott M., Allen P., Anderson S., Goodwin N. Assessing the impact of the NHS Service Delivery and Organisation Research and Development Programme. Evidence & Policy 2008;4:313-30.
- Petticrew M., Roberts H. Systematic Reviews in the Social Sciences: A Practical Guide. Oxford: Blackwell; 2007.
- Popay J., Roberts H., Sowden A., Petticrew M., Arai L., Rodgers M., et al. Guidance on the Conduct of Narrative Synthesis in Systematic Reviews: A Product from the ESRC Methods Programme. Swindon: ESRC; 2006.
- Rosenberg N. Why do firms do basic research (with their own money)? Research Policy 1990;19:165-74.
- Rosenheck R. Organisational Process: a missing link between research and practice. Psychiatric Services 2001;52.
- Sheldon T, Cullum N, Dawson D, Lankshear A, Lowson K, Watt I, et al. What's the evidence that NICE guidance has been implemented? Results from a national evaluation using time series analysis, audit of patients' notes, and interviews. BMJ 2004;329:999-1007.
- Shergold M., Grant J. Freedom and need: The evolution of public strategy for biomedical and health research in England. Health Res Policy Syst 2008;6.
- Soper B., Buxton M., Hanney S., Oortwijn W., Scoggins A., Steel N., et al. Developing the protocol for the evaluation of the health foundation's ‘engaging with quality initiative’ – an emergent approach. Implementation Science 2008;3.
- Soper B., Hanney S. Lessons from the evaluation of the UK's NHS R&D Implementation Methods Programme. Implementation Science 2007;2.
- Tetroe J. M., Graham I. D., Foy R., Robinson N., Eccles M. P., Wensing M., et al. Health research funding agencies' support and promotion of knowledge translation: an international study. Milbank Quarterly 2008;86:125-55.
- von Walden Laing D, Hanney S, Buxton M. HERG Research Report No 24. Uxbridge: HERG, Brunel University; 1997.
- Walshe K., Davies H. Research, influence and impact: Deconstructing the norms of health services research commissioning. Policy and Society 2010;29:103-11.
- Wisely J. National R&D Programme in 2001.
- Wisely J. The National R&D Programme on Primary Secondary Care 2001.
- Wooding S., Hanney S., Buxton M., Grant J. Payback arising from research funding: evaluation of the Arthritis Research Campaign. Rheumatology 2005;44:1145-56.
- Wooding S., Hanney S., Pollitt A., Buxton M., Grant J. Project Retrosight. Understanding the returns from cardiovascular and stroke research: Policy Report. Cambridge, UK: RAND Europe; 2011.
- Zahra S. A., George G. Absorptive capacity: A review, reconceptualization, and extension. Academy of Management Review 2002;27:185-203.
- This protocol refers to independent research commissioned by the National Institute for Health Research (NIHR). Any views and opinions expressed therein are those of the authors and do not necessarily reflect those of the NHS, the NIHR, the SDO programme or the Department of Health.
Appendix 2 Search strategy – MEDLINE
Database: Ovid MEDLINE (R) In-Process & Other Non-Indexed Citations and Ovid MEDLINE (R) <1946 to present>
Date run
6 March 2012.
Limits
Search dates: 1990 to current.
Search fields: title and abstract.
Languages: all.
Publication type: case reports; classical article; clinical conference; clinical trial; controlled clinical trial; corrected and republished article; evaluation studies; festschrift; government publications; guideline; journal article; meta-analysis; multicenter study; review; practice guideline; published erratum; RCT; review of reported cases; research support – all; technical report; validation studies.
Search strategy
1.
(engag$ adj2 research$).ti,ab. OR (engag$ adj2 trial?).ti,ab. OR (engag$ adj2 case stud$).ti,ab. OR (engag$ adj2 clinical stud$).ti,ab. OR (engag$ adj2 experimental therap$).ti,ab. OR (engag$ adj2 RCT?).ti,ab. OR (engag$ adj2 randomi?ed controlled trial?).ti,ab. OR (engag$ adj4 clinical trial?).ti,ab. OR (participat$ adj4 research$).ti,ab. OR (participat$ adj4 trial?).ti,ab. OR (participat$ adj4 case stud$).ti,ab. OR (participat$ adj4 clinical stud$).ti,ab. OR (participat$ adj4 experimental therap$).ti,ab. OR (participat$ adj4 RCT?).ti,ab. OR (participat$ adj2 randomi?ed controlled trial?).ti,ab. OR (participat$ adj4 clinical trial?).ti,ab. OR (involv$ adj2 research$).ti,ab. OR (involv$ adj2 trial?).ti,ab. OR (involv$ adj2 case stud$).ti,ab. OR (involv$ adj2 clinical stud$).ti,ab. OR (involv$ adj2 experimental therap$).ti,ab. OR (involv$ adj2 RCT?).ti,ab. OR (involv$ adj2 randomi?ed controlled trial?).ti,ab. OR (involv$ adj4 clinical trial?).ti,ab. OR (interact$ adj2 research$).ti,ab. OR (interact$ adj2 trial?).ti,ab. OR (interact$ adj2 case stud$).ti,ab. OR (interact$ adj2 clinical stud$).ti,ab. OR (interact$ adj2 experimental therap$).ti,ab. OR (interact$ adj2 RCT?).ti,ab. OR (interact$ adj2 randomi?ed controlled trial?).ti,ab. OR (interact$ adj4 clinical trial?).ti,ab. OR (tak$ part adj3 research$).ti,ab. OR (tak$ part adj5 trial?).ti,ab. OR (tak$ part adj3 case stud$).ti,ab. OR (tak$ part adj3 clinical stud$).ti,ab. OR (tak$ part adj3 experimental therap$).ti,ab. OR (tak$ part adj3 RCT?).ti,ab. OR (tak$ part adj2 randomi?ed controlled trial?).ti,ab. OR (tak$ part adj4 clinical trial?).ti,ab. OR (initiat$ adj2 research$).ti,ab. OR (initiat$ adj2 trial?).ti,ab. OR (initiat$ adj2 case stud$).ti,ab. OR (initiat$ adj2 clinical stud$).ti,ab. OR (initiat$ adj2 experimental therap$).ti,ab. OR (initiat$ adj2 RCT?).ti,ab. OR (initiat$ adj2 randomi?ed controlled trial?).ti,ab. OR (initiat$ adj4 clinical trial?).ti,ab. OR (follow$ adj2 research$).ti,ab. OR (follow$ adj2 trial?).ti,ab. OR (follow$ adj2 case stud$).ti,ab. OR (follow$ adj2 clinical stud$).ti,ab. OR (follow$ adj2 experimental therap$).ti,ab. OR (follow$ adj2 RCT?).ti,ab. OR (follow$ adj2 randomi?ed controlled trial?).ti,ab. OR (follow$ adj4 clinical trial?).ti,ab. OR (introduc$ adj2 research$).ti,ab. OR (introduc$ adj2 trial?).ti,ab. OR (introduc$ adj2 case stud$).ti,ab. OR (introduc$ adj2 clinical stud$).ti,ab. OR (introduc$ adj2 experimental therap$).ti,ab. OR (introduc$ adj2 RCT?).ti,ab. OR (introduc$ adj2 randomi?ed controlled trial?).ti,ab. OR (introduc$ adj4 clinical trial?).ti,ab. OR (conduct$ adj2 research$).ti,ab. OR (conduct$ adj2 trial?).ti,ab. OR (conduct$ adj2 case stud$).ti,ab. OR (conduct$ adj2 clinical stud$).ti,ab. OR (conduct$ adj2 experimental therap$).ti,ab. OR (conduct$ adj2 RCT?).ti,ab. OR (conduct$ adj2 randomi?ed controlled trial?).ti,ab. OR (conduct$ adj4 clinical trial?).ti,ab. OR (learning organi?ation?).ti,ab. OR (research intensive organi?ation?).ti,ab. OR (academic medical centre?).ti,ab. OR (academic medical center?).ti,ab. OR (academic health science centre?).ti,ab. OR (academic health science center?).ti,ab. OR (research network?).ti,ab. OR (research collaboration?).ti,ab. OR (study hospital?).ti,ab. OR (teaching research facilities).ti,ab. OR (trial hospital?).ti,ab. OR (veterans health administration).ti,ab. (67,083)
2.
((improve$).ti,ab. OR (influence$).ti,ab. OR (determine$).ti,ab. OR (affect$).ti,ab. OR (effect$).ti,ab. OR (increase$).ti,ab. OR (decrease$).ti,ab. OR (declines$).ti,ab. OR (diminish$).ti,ab. OR (weake$).ti,ab. OR (worse$).ti,ab. OR (benefi$).ti,ab. OR (impact$).ti,ab. OR (better).ti,ab. OR (worse).ti,ab. OR (greater).ti,ab. OR (lesser).ti,ab. OR (lower).ti,ab. OR (higher).ti,ab. OR (evaluat$).ti,ab. OR (compar$).ti,ab.) adj5 ((performance).ti,ab. OR (patient$ adj4 outcome?).ti,ab. OR (process quality).ti,ab. OR (process assessment?).ti,ab. OR (health care adj4 outcome?).ti,ab.OR (healthcare adj4 outcome?).ti,ab. OR (clinical adj4 outcome?).ti,ab. OR (quality adj4 care).ti,ab. OR (compar$ adj4 outcome?).ti,ab. OR (patient$ adj4 mortality).ti,ab. OR (routine adj clinical practice).ti,ab. OR (mortality adj4 outcome$).ti,ab. OR (organi?ational process$).ti,ab. OR (organi?ational determinant$).ti,ab. OR (organi?ational characteristic?).ti,ab. OR (organi?ational innovation?).ti,ab. OR (organi?ational culture).ti,ab. OR (organi?ational support).ti,ab. OR (clinical adj2 care).ti,ab. OR (treatment outcome).ti,ab. OR (adhere$ adj4 guideline?).ti,ab. OR ("use$" adj4 guideline?).ti,ab. OR (clinical practi?e).ti,ab. OR (patient$ satisfaction).ti,ab.) (7,650,724)
3.
(practice adj4 change?).ti,ab. OR (service adj4 change?).ti,ab. OR (organi?ational change?).ti,ab. OR (treatment change?).ti,ab. OR (prescri$ change?).ti,ab. (713,416)
4.
2 OR 3 (10,879)
5.
1 AND 4 (4,804)
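The combination logic of the five numbered search lines above (line 4 = 2 OR 3; line 5 = 1 AND 4) can be read as simple set operations over the records each line retrieves. The sketch below restates that logic with invented record identifiers purely for illustration; it is not part of the search strategy itself.

# Invented record identifiers standing in for the hits of each search line
line1 = {"rec_a", "rec_b"}        # engagement/participation terms
line2 = {"rec_b", "rec_c"}        # performance/outcome terms
line3 = {"rec_d"}                 # practice/service change terms

line4 = line2 | line3             # line 4: 2 OR 3
line5 = line1 & line4             # line 5: 1 AND 4 (records retrieved for sifting)
print(sorted(line5))              # ['rec_b']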
Appendix 3 Focused review – excluded papers
Aarons G, Sommerfeld DH, Walrath-Greene CM. Evidence-based practice implementation: the impact of public versus private sector organization type on organizational support, provider attitudes, and adoption of evidence-based practice. Implementation Sci 2009;4:83.
Aarons GA, Glisson C, Green PD, Hoagwood K, Kelleher KJ, Landsverk JA. The organizational social context of mental health services and clinician attitudes toward evidence-based practice: a United States national study. Implementation Sci 2012;7:56.
Abernethy AP, Barbour SY, Uronis H, Zafar SY, Coan A, Rowe K, et al. Quality management of potential chemotherapy-induced neutropenic complications: evaluation of practice in an academic medical center. Support Care Cancer 2009;17:735–44.
Aiello EJ, Buist DSM, Wagner EH, Tuzzio L, Greene SM, Lamerato LE, et al. Diffusion of aromatase inhibitors for breast cancer therapy between 1996 and 2003 in the Cancer Research Network. Breast Cancer Res Treat 2008;107:397–403.
Albert M. Shaping a learning organization through the linkage of action research interventions. Organ Dev J 1998;16:29–39.
Albert M, Fretheim A, Maïga D. Factors influencing the utilization of research findings by health policy-makers in a developing country: the selection of Mali's essential medicines. Health Res Policy Syst 2007;5:2.
Alberta Heritage Foundation for Medical Research (AHFMR). Assessment of Health Research Fund Outputs and Outcomes: 1995–2003. Edmonton, AB: Alberta Heritage Foundation for Medical Research; 2003.
Al-Khatib SM. New directions in clinical outcomes assessment: Sixth International Symposium on Interventional Electrophysiology in the Management of Cardiac Arrhythmias. J Interv Card Electrophysiol 2011;32:221–6.
Alligood MR. Theory-based practice in a major medical centre. J Nurs Manag 2011;19:981–8.
Alter DA, Naylor CD, Austin PC, Tu JV. Long-term MI outcomes at hospitals with or without on-site revascularization. JAMA 2001;285:2101–8.
Amendola MG. Empowerment: healthcare professionals’ and community members’ contributions. J Cult Div 2011;18:82–9.
Anon. Veterans Health Administration clinics provide better overall quality of care than many other sites. AHRQ Res Activities 2005;17.
Antil T, Desrochers M, Joubert P, Bouchard C. Implementation of an innovative grant programme to build partnerships between researchers, decision-makers and practitioners: the experience of the Quebec Social Research Council. J Health Serv Res Policy 2003;8:35–43.
Arbabi S, Jurkovich GJ, Rivara FP, Nathens AB, Moore M, Demarest GB et al. Patient outcomes in academic medical centers – influence of fellowship programs and in-house on-call attending surgeon. Injury 2003;138:47–51.
Arbaje AI, Maron DD, Yu Q, Wendel VI, Tanner E, Boult C, et al. The geriatric floating interdisciplinary transition team. J Am Geriatr Society 2010;58:364–70.
Armitage SK, Williams L. Liaison and continuity of nursing care. Cardiff: Temple of Peace and Health, South Glamorgan Health Authority; 1990.
Armstrong B, Levesque O, Perlin J, Rick C, Schectman G. Reinventing Veterans Health Administration: focus on Primary Care. J Healthcare Manag 2005;50:399–408, discussion 409.
Artis LC, Burkhart TM, Johnson TJ, Matuszewski KA. Physician factors as an indicator of technological device adoption. J Med Syst 2006;30:177–86.
Asadoorian J, Hearson B, Satyanarayana S, Ursel J. Evidence-based practice in healthcare: an exploratory cross-discipline comparison of enhancers and barriers. J Healthcare Qual 2010;32:15–22.
Asch SM, Baker DW, Keesey JW, Broder M, Schonlau M, Rosen M, et al. Does the collaborative model improve care for chronic heart failure? Med Care 2005;43:667–75.
Asch SM, McGlynn EA, Hogan MM, Hayward RA, Shekelle P, Rubenstein L. Comparison of quality of care for patients in the Veterans Health Administration and patients in a national sample. Ann Int Med 2004;141:938–45.
Ashton CM, Bozkurt B, Colucci WB, Kiefe CI, Mann DL, Massie BM, et al. Veterans Affairs Quality Enhancement Research Initiative in chronic heart failure. Med Care 2000;38:26–37.
Asomugha CN, Derose KP, Lurie N. Faith-based organizations, science, and the pursuit of health. J Health Care Poor Underserved 2011;22:50–5.
Atwal A. Getting the evidence into practice: the challenges and successes of action research. Br J Occup Ther 2002;65:335–41.
Babl F, Borland M, Ngo P, Acworth J, Krieser D, Pandit S, et al. Paediatric Research in Emergency Departments International Collaborative (PREDICT): first steps towards the development of an Australian and New Zealand research network. Emerg Med Australas 2006;18:143–7.
Babl FE, Sheriff N, Borland M, Acworth J, Neutze J, Krieser D, et al. Paediatric acute asthma management in Australia and New Zealand: practice patterns in the context of clinical practice guidelines. Arch Dis Child 2009;93:307–12.
Badgett JT. Pediatric residents as managed care providers . . . including commentary by Macfarlane A. Ambul Child Health 1997;2:261–7.
Baessler CA, Blumberg M, Cunningham JS, Curran JA, Fennesey AG, Jacobs JM, et al. Medical-surgical nurses’ utilization of research methods and products. Medsurg Nurs 1994;3:113–41.
Bakas T, Farran CJ, Williams LS. Writing with a collaborative team. Rehabil Nurs 2006;31:222–4.
Baker R, Lecouturier J, Bond S. Explaining variation in GP referral rates for X-rays for back pain. Implementation Sci 2006;1:15.
Baker R, Robertson N, Rogers S, Davies M, Brunskill N, Khunti K, et al. The National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care (CLAHRC) for Leicestershire, Northamptonshire and Rutland (LNR): a programme protocol. Implementation Sci 2009;4:72.
Balakas K, Potter P, Pratt E, Rea G, Williams J. Evidence Equals Excellence: the application of an evidence-based practice model in an academic medical center. Nurs Clin North Am 2009;44:1–10.
Balcom RJ, Helmuth WV, Beaumont E, Sutton D, Vollman JH, Kantak AD, et al. The Vermont-Oxford Trials Network: very low birth weight outcomes for 1990. Pediatrics 1993;91:540–5.
Bateman H, Boase S, Tribedi D, Williams K, Gorham B. Acknowledging the contribution of nurses who support quality care through involvement in research studies. Qual Prim Care 2006;14:71–6.
Battersby M, McDonald P, Pearce R, Tolchard B, Allen K. The changing attitudes of health professionals and consumers towards a coordinated care trial – SA HealthPlus. Aust Health Rev 2001;24:172–8.
Beck C, Heacock P, Mercer SO, Doan R, O'Sullivan PS, Stevenson JG, et al. Sustaining a best-care practice in a nursing home. J Healthcare Qual 2005;27:5–16.
Beckett M, Quiter E, Ryan G, Berrebi C, Taylor S, Cho M, et al. Bridging the gap between basic science and clinical practice: The role of organizations in addressing clinician barriers. Implementation Sci 2011;6:35.
Beech R, Devilliers R, Thorniley-Jones H, Welch H, Farrar S, Douglas C, et al. Promoting evidence informed service development: A study of falls services in Cheshire. Prim Health Care Res Dev 2010;11:222–32.
Behal R, Finn J. Understanding and improving inpatient mortality in academic medical centers. Acad Med 2009;84:1657–62.
Belkhodja O, Amara N, Landry R, Ouimet M. The extent and organizational determinants of research utilization in Canadian Health Services Organizations. Sci Commun 2007;28:377–417.
Bellomo R, Stow PJ, Hart GK. Why is there such a difference in outcome between Australian intensive care units and others? Curr Opin Anaesthesiol 2007;20:100–5.
Bennett IM, Coco A, Anderson J, Horst M, Gambler AS, Barr WB, et al. Improving maternal care with a continuous quality improvement strategy: a report from the Interventions to Minimize Preterm and Low Birth Weight Infants through Continuous Improvement Techniques (IMPLICIT) Network. J Am Board Fam Med 2009;22:380–6.
Berkhout AJMB, Boumans NPG, Mur I, Nijhuis FJN. Conditions for successfully implementing resident-oriented care in nursing homes. Scand J Caring Sci 2009;23:298–308.
Bernard DB, Coburn KD, Miani MA. Health and disease management within an academic health system. Dis Manag Health Outcomes 2000;7:21–37.
Berta W, Laporte A, Kachan N. Unpacking the relationship between operational efficiency and quality of care in Ontario long-term care homes. Can J Aging 2010;29:543–56.
Birkmeyer JD, Siewers AE, Finlayson EVA, Stukel TA, Lucas FL, Batista I, et al. Hospital volume and surgical mortality in the United States. N Engl J Med 2002;346:1128–37.
Birkmeyer JD, Stukel TA, Siewers AE, Goodney PP, Wennberg DE, Lucas FL. Surgeon volume and operative mortality in the United States. N Engl J Med 2003;349:2117–27.
Blevins D, Farmer MS, Edlund C, Sullivan G, Kirchner JE. Collaborative research between clinicians and researchers: a multiple case study of implementation. Implementation Sci 2010;5:76.
Bodenheimer T, Young DM, MacGregor K, Holtrop JS. Practice-based research in primary care: facilitator of, or barrier to, practice improvement? Ann Fam Med 2005;3(Suppl. 2):S28–32.
Boström A-M, Kajermo KN, Nordström G, Wallin L. Barriers to research utilization and research use among registered nurses working in the care of older people: does the BARRIERS scale discriminate between research users and non-research users on perceptions of barriers? Implementation Sci 2008;3:24.
Bowen S, Erickson T, Martens P, Crockett S. More than ‘using research’: the real challenges in promoting evidence-informed decision-making. Healthcare Policy;4:87–102.
Bowen S, Martens P, Team TNTK. Demystifying knowledge translation: learning from the community. J Health Serv Res Policy 2005;10:203–11.
Bowman J, Llewellyn G. Clinical outcomes research from the occupational therapist's perspective. Occup Ther Int 2002;9:145–66.
Boyde M, Jen C, Henderson A, Winch S. A clinical development unit in cardiology: the way forward. Int J Nurs Prac 2005;11:134–9.
Boyko JA, Lavis JN, Dobbins M, Souza NM. Reliability of a tool for measuring theory of planned behaviour constructs for use in evaluating research use in policymaking. Health Res Policy Syst 2011;9:29.
Buckley DI, Calvert JF, Lapidus JA, Morris CD. Chronic opioid therapy and preventive services in rural primary care: an Oregon rural practice-based research network study. Ann Fam Med 2010;8:237–44.
Bullock A, Morris ZS, Atwell C. Collaboration between health services managers and researchers: making a difference? J Health Serv Res Policy 2012;17(Suppl. 2):2–10.
Bullock A, Morris ZS, Atwell C. A formative evaluation of the Service Delivery and Organisation (SDO) management fellowships. Final report. Norwich: Queen's Printer and Controller of HMSO; 2012.
Burchett HED, Lavis JN, Mayhew SH, Dobrow MJ. Perceptions of the usefulness and use of research conducted in other countries. Evid Policy 2012;8:7–16.
Buse K, Tanaka S. Global public–private health partnerships: lessons learned from ten years of experience and evaluation. Int Dent J 2011;61(Suppl. 2):2–10.
Bush A. How has research in the last five years changed my clinical practice? Arch Dis Child 2005;90:832–6.
Butler M, Collins R, Drennan J, Halligan P, O’Mathúna DP, Schultz TJ, et al. Hospital nurse staffing models and patient and staff-related outcomes. Cochrane Database Syst Rev 2011;7:CD007019.
Buxton M, Elliott R, Hanney S, Henkel M, Keen J, Sculpher MJ, et al. Assessing payback from Department of Health research and development: preliminary report – volume two: eight case studies. Uxbridge: Health Economics Research Group, Brunel University; 1994.
Buxton M, Hanney S. How can payback from health services research be assessed? J Health Serv Res Policy 1996;1:35–43.
Cambrosio A, Keating P, Mercier S, Lewison G, Mogoutov A. Mapping the emergence and development of translational cancer research. Eur J Cancer 2006;42:3140–8.
Caminiti C, Scoditti U, Diodati F, Passalacqua R. How to promote, improve and test adherence to scientific evidence in clinical practice. BMC Health Serv Res 2005;5:62.
Capo KM, Rutledge DR. Applying managed care performance measures in community pharmacy-based outcomes research. J Am Pharm Assoc 1999;39:384–8.
Carpenter WR, Fortune-Greeley AK, Zullig LL, Lee S-Y, Weiner BJ. Sustainability and performance of the National Cancer Institute's Community Clinical Oncology Program. Contemp Clin Trials 2012;33:46–54.
Cartwright T, Richards DA, Boehm KA. Cancer of the pancreas: are we making progress? A review of studies in the US Oncology Research Network. Cancer Control 2008;15:308–13.
Chagnon F, Pouliot L, Malo C, Gervais MJ, Pigeon M-E. Comparison of determinants of research knowledge utilization by practitioners and administrators in the field of child and family social services. Implementation Sci 2010;5:41.
Chalkidou K, Tunis S, Lopert R, Rochaix L, Sawacki PT, Nasser M, et al. Comparative effectiveness research and evidence-based health policy: experience from four countries. Milbank Q 2009;87:339–67.
Chan GK, Barnason S, Dakin CL, Gillespie G, Kamienski MC, Stapleton S, et al. Barriers and perceived needs for understanding and using research among emergency nurses. J Emerg Nurs 2011;37:24–31.
Chiu L-F. Minority ethnic women and cervical screening: a matter of action or research? Prim Health Care Res Dev 2004;5:104–16.
Christmas C, Durso SC, Kravet SJ, Wright SM. Advantages and challenges of working as a clinician in an academic department of medicine: academic clinicians’ perspectives. J Grad Med Educ 2010;2:478–84.
Chu C, Umanski G, Blank A, Grossberg R, Selwyn PA. HIV-infected patients and treatment outcomes: an equivalence study of community-located, primary care-based HIV treatment vs. hospital-based specialty care in the Bronx, New York. AIDS Care 2010;22:1522–9.
Ciliska D, Mitchell A, Baumann A, Sheppard K, Van Berkel C, Adam V, et al. Changing nursing practice – trisectoral collaboration in decision making. Can J Nurs Admin 1996;9:60–73.
Clark L, Fink R, Pennington K, Jones K. Nurses’ reflections on pain management in a nursing home setting. Pain Manag Nurs 2006;7:71–7.
Clarke M, Loudon K. Effects on patients of their healthcare practitioner's or institution's participation in clinical trials: a systematic review. Trials 2011;12:16.
Clarkson J. Experience of clinical trials in general dental practice. Adv Dent Res 2005;18:39–41.
Cleary PD, Edgman-Levitan S, Walker JD, Gerteis M, Delblanco TL. Using patient reports to improve medical care: a preliminary report from 10 hospitals. Qual Manag Health Care 1993;2:31–8.
Cohen D, McDaniel RR Jr, Crabtree BF, Ruhe MC, Weyer SM, Tallia A, et al. A practice change model for quality improvement in primary care practice. J Healthcare Manag 2004;49:155–68, discussion 169–70.
Cohen WM, Levinthal DA. Innovation and learning: the two faces of R&D. Econ J 1989;99:569–96.
Cohen WM, Levinthal DA. Absorptive capacity: a new perspective on learning and innovation. Admin Sci Q 1990;35:128–52.
Colenda CC, Wagenaar DB, Mickus M, Marcus SC, Tanielian T, Pincus HA, et al. Comparing clinical practice with guideline recommendations for the treatment of depression in geriatric patients: findings from the APA practice research network. Am J Geriatr Psychiatr 2003;11:448–57.
Lifeline Registry of Endovascular Aneurysm Repair Steering Committee. Lifeline registry: collaborative evaluation of endovascular aneurysm repair. J Vasc Surg 2001;34:1139–46.
Contandriopoulos D, Lemire M, Denis J-L, Tremblay É. Knowledge exchange processes in organizations and policy arenas: a narrative systematic review of the literature. Milbank Q 2010;88:444–83.
Cooper GS, Chak A, Harper DL, Pine M, Rosenthal GE. Care of patients with upper gastrointestinal hemorrhage in academic medical centers: a community-based comparison. Gastroenterology 1996;111:385–90.
Cooperberg MR, Broering JM, Litwin MS, Lubeck DP, Mehta SS, Henning JM, et al. The contemporary management of prostate cancer in the United States: lessons from the cancer of the prostate strategic urologic research endeavor (CapSURE), a national disease registry. J Urol 2004;171:1393–401.
Costantini O, Huck K, Carlson MD, Boyd K, Buchter CM, Raiz P, et al. Impact of a guideline-based disease management team on outcomes of hospitalized patients with congestive heart failure. Arch Intern Med 2001;161:177–82.
Cox JD. Evolution and accomplishments of the Radiation Therapy Oncology Group. Int J Rad Oncol Biol Phys 1995;33:747–54.
Cox JD. Evidence in oncology. The Janeway lecture 2000. Cancer J 2000;6:351–7.
Cox K, Mahone I, Merwin E. Improving the quality of rural nursing care. Annu Rev Nurs Res 2008;26:175–94.
Craig TJ, Perlin JB, Fleming BB. Self-reported performance improvement strategies of highly successful Veterans Health Administration facilities. Am J Med Qual 2007;22:438–44.
Craig TJ, Petzel R. Management perspectives on research contributions to practice through collaboration in the U.S. Veterans Health Administration: QUERI Series. Implementation Sci 2009;4:8.
Crites GE, McNamara MC, Akl E, Richardson WS, Umscheid C, Nishikawa J. Evidence in the learning organization. Health Res Policy Syst 2009;7:4.
Cruickshank D. The experience of action research in practice. Contemp Nurse 1996;5:127–32.
Curran GM, Mukherjee S, Allee E, Owen RR. A process for developing an implementation intervention: QUERI Series. Implementation Sci 2008;3:17.
Currow DC, Abernethy AP, Shelby-James TM, Phillips PA. The impact of conducting a regional palliative care clinical study. Palliat Med 2006;20:735–43.
Da Silva-Filho AL, Martins PALS, Parente MP, Saleme CS, Roza T, Pinotti M, et al. Translation of biomechanics research to urogynecology. Arch Gynecol Obstetr 2010;282:149–55.
Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Sci 2009;4:50.
Daniel D, Maund C, Butler K. Community hospital participation in a pilot project for venous thromboembolism quality measures: learning, collaboration, and early improvement. Jt Comm J Qual Patient Saf 2010;36:301–9.
Daniels K, Lewin S. Translating research into maternal health care policy: a qualitative case study of the use of evidence in policies for the treatment of eclampsia and pre-eclampsia in South Africa. Health Res Policy Syst 2008;6:12.
Darer J, Pronovost P, Bass EB. Use and evaluation of critical pathways in hospitals. Effective Clin Prac 2002;5:114–19.
Davis D, O’Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA 1999;282:867–74.
Davis D, Thomson M, Oxman AD, Haynes RB. Evidence for the effectiveness of CME. A review of 50 randomized controlled trials. JAMA 1992;268:1111–17.
Davis P, Howden-Chapman P. Translating research findings into health policy. Soc Sci Med 1996;43:865–72.
Day J, Higgins I, Koch T. The process of practice redesign in delirium care for hospitalised older people: a participatory action research study. Int J Nurs Stud 2009;46:13–22.
Day T, Wainwright SP, Wilson-Barnett J. An evaluation of a teaching intervention to improve the practice of endotracheal suctioning in intensive care units. J Clin Nurs 2001;10:682–96.
De Goede J, Van Bon-Martens MJH, Putters K, Van Oers HAM. Looking for interaction: quantitative measurement of research utilization by Dutch local health officials. Health Res Policy Syst 2012;10:9.
De Muylder RR, Tonglet RR, Nackers F, Boland B. Randomised evaluation of a specific training of general practitioners in cardiovascular prevention. Acta Cardiol 2005;60:199–205.
Demakis JG, McQueen L, Kizer KW, Feussner JR. Quality Enhancement Research Initiative (QUERI): a collaboration between research and clinical practice. Med Care 2000;38:17–25.
Denis J-L, Hébert Y, Langley A, Lozeau D, Trottier L-H. Explaining diffusion patterns for complex health care innovations. Health Care Manag Rev 2002;27:60–73.
Denis J-L, Lehoux P, Hivon M, Champagne F. Creating a new articulation between research and practice through policy? The views and experiences of researchers and practitioners. J Health Serv Res Policy 2003;8(Suppl. 2):44–50.
Denis J-L, Lomas J. Convergent evolution: the academic and policy roots of collaborative research. J Health Serv Res Policy 2003;8(Suppl. 2):1–6.
Denis J-L, Lomas J, Stipich N. Creating receptor capacity for research in the health system: the Executive Training for Research Application (EXTRA) program in Canada. J Health Serv Res Policy 2008;13(Suppl. 1):1–7.
Desai J, Solberg L, Clark C, Reger L, Pearson T, Bishop D, et al. Improving diabetes care and outcomes: the secondary benefits of a public health-managed care research collaboration. J Public Health Manag Pract 2003;9(Suppl.):S36–S43.
Dhalla IA, Detsky AS. Aligning incentives for academic physicians to improve health care quality. JAMA 2011;305:932–3.
Disch J, Chlan L, Mueller C, Akinjuotu T, Sabo J, Feldt K, et al. The Densford Clinical Scholars Program: improving patient care through research partnerships. J Nurs Admin 2006;36:567–74.
Dixon S, Coleman P, Nicholl J, Brennan A, Touch S. Evaluation of the impact of a technology appraisal process in England: the South and West Development and Evaluation Committee. J Health Serv Res Policy 2003;8:18–24.
Dobbins M, Robeson P, Ciliska D, Hanna S, Cameron R, O’Mara L, et al. A description of a knowledge broker role implemented as part of a randomized controlled trial evaluating three knowledge translation strategies. Implementation Sci 2009;4:23.
Dobbins M, Rosenbaum P, Plews N, Law M, Fysh A. Information transfer: what do decision makers want and need from researchers? Implementation Sci 2007;2:20.
Doherty SR, Jones PD. Use of an ‘evidence-based implementation’ strategy to implement evidence-based care of asthma into rural district hospital emergency departments. Rural Remote Health 2006;6:529.
D’Onofrio G, Nadel ES, Degutis LC, Sullivan LM, Casper K, Bernstein E, et al. Improving emergency medicine residents’ approach to patients with alcohol problems: a controlled educational trial. Ann Emerg Med 2002;40:50–62.
Donovan JL, Schroeder WS, Tran MT, Foster K, Forrest A, Lee TB, et al. Assessment of eptifibatide dosing in renal impairment before and after in-service education provided by pharmacists. J Manag Care Pharm 2007;13:598–606.
Dopson S, Locock L, Chambers D, Gabbay J. Implementation of evidence-based medicine: evaluation of the Promoting Action on Clinical Effectiveness programme. J Health Serv Res Policy 2001;6:23–31.
Drabo M, Dauby C, Macq J, Seck I, Ouendo EM, Sani I, et al. An action research network to improve the quality of tuberculosis care in West Africa. Cahiers Sante 2007;17:79–86.
Driessen MT, Groenewoud K, Proper KI, Anema JR, Bongers PM, Van Der Beek AJ. What are possible barriers and facilitators to implementation of a participatory ergonomics programme? Implementation Sci 2010;5:64.
Du Bois A, Rochon J, Pfisterer J, Hoskins WJ. Variations in institutional infrastructure, physician specialization and experience, and outcome in ovarian cancer: a systematic review. Gynecol Oncol 2009;112:422–36.
Eagar K, Cromwell D, Owen A, Senior K, Gordon R, Green J. Health services research and development in practice: an Australian experience. J Health Serv Res Policy 2003;8(Suppl. 2):7–13.
East L, Bond M, Hillman R. Bringing about change in health care through action research. Nottingham: Nottingham University; 1999.
East L, Robinson J. Change in process: bringing about change in health care through action research. J Clin Nurs 1994;3:57–61.
Eisenberg JM. Continuing education meets the learning organization: the challenge of a systems approach to patient safety. J Contin Educ Health Prof 2002;20:197–207.
El-Jardali F, Lavis JN, Ataya N, Jamal D. Use of health systems and policy research evidence in the health policymaking in eastern Mediterranean countries: views and practices of researchers. Implementation Sci 2012;7:2.
Ellen ME, Lavis JN, Ouimet M, Grimshaw J, Bédard P-O. Determining research knowledge infrastructure for healthcare systems: a qualitative study. Implementation Sci 2011;6:60.
Elliot AE. An analysis of participation, quality of care and efficiency outcomes of an inter-organizational network of nursing homes. Dissertation Abstr Int Section B Sci Engin 2007;68:2231.
Elliott H, Popay J. How are policy makers using evidence? Models of research utilisation and local NHS policy making. J Epidemiol Community Health 2000;54:461–8.
Elliot TE, Elliot BA, Regal RR, Reiner CM, Haller IV, Crouse BJ, et al. Improving rural cancer patients’ outcomes: a group-randomized trial. J Rural Health 2004;20:26–35.
Elwyn G, Taubert M, Kowalczuk J. Sticky knowledge: a possible model for investigating implementation in healthcare contexts. Implementation Sci 2007;2:44.
Englesbe MJ, Pelletier SJ, Magee JC, Gauger P, Schifftner T, Henderson WG, et al. Seasonal variation in surgical outcomes as measured by the American College of Surgeons-National Surgical Quality Improvement Program (ACS-NSQIP). Ann Surg 2007;246:455–6.
Evans DC, Nichol WP, Perlin JB. Effect of the implementation of an enterprise-wide electronic health record on productivity in the Veterans Health Administration. Health Econ Policy Law 2006;1:163–9.
Fanaroff AA, Hack M, Walsh MC. The NICHD neonatal research network: changes in practice and outcomes during the first 15 years. Semin Perinatol 2003;27:281–7.
Feifer C, Ornstein SM. Strategies for increasing adherence to clinical guidelines and improving patient outcomes in small primary care practices. Jt Comm J Qual Patient Saf 2004;30:432–41.
Feifer C, Ornstein SM, Nietert PJ, Jenkins RG. System supports for chronic illness care and their relationship to clinical outcomes. Top Health Info Manag 2001;22:65–72.
Feldman AM, Weitz H, Merli G, Decaro M, Brechbill AL, Adams S, et al. The physician-hospital team: a successful approach to improving care in a large academic medical center. Acad Med 2006;8:35–41.
Feldman PH, Kane RL. Strengthening research to improve the practice and management of long-term care. Milbank Q 2003;81:179–220.
Ferlie E, Crilly T, Jashapara A, Peckham A. Knowledge mobilisation in healthcare: a critical review of health sector and generic management literature. Soc Sci Med 2012;74:1297–304.
Ferlie E, Wood M. Novel mode of knowledge production? Producers and consumers in health services research. J Health Serv Res Policy 2003;8:51–7.
Fletcher M, Beringer A. Introduction to action research. Paediatr Nurs 2009;21:30.
Flotte TJ. Research by pathologists in the United States: analysis of publications. Mod Pathol 1993;6:484–6.
Fortney J, Enderle M, McDougall S, Clothier J, Otero J, Altman L, et al. Implementation outcomes of evidence-based quality improvement for depression in VA community based outpatient clinics. Implementation Sci 2012;7:30.
Fowler B, Burlina A, Kozich V, Vaney-Saban C. Quality of analytical performance in inherited metabolic disorders: the role of ERNDIM. J Inherit Metab Dis 2008;31:680–9.
Francis J, Perlin JB. Improving performance through knowledge translation in the Veterans Health Administration. J Contin Educ Health Prof 2006;26:63–71.
Frayne SM, Freund KM, Skinner KM, Ash AS, Moskowitz MA. Depression management in medical clinics: does healthcare sector make a difference? Am J Med Qual 2004;19:28–36.
Freda MC. Nursing's contribution to the literature on preterm labor and birth. J Obstetr Gynecol Neonatal Nurs 2003;32:659–67.
French B, Thomas LH, Baker P, Burton CR, Pennington L, Roddam H. What can management theories offer evidence-based practice? A comparative analysis of measurement tools for organisational context. Implementation Sci 2009;4:28.
Froggatt K, Hockley J. Action research in palliative care: defining an evaluation methodology. Palliat Med 2001;25:782–7.
Gabler NB, Duan N, Vohra S, Kravitz RL. N-of-1 trials in the medical literature: a systematic review. Med Care 2011;49:761–8.
Gao J, Moran E, Almenoff PL, Render ML, Campbell J, Jha AK. Variations in efficiency and the relationship to quality of care in the veterans health system. Health Aff 2011;30:655–63.
Garcia FA, Miller HB, Huggins GR, Gordon TA. Effect of academic affiliation and obstetric volume on clinical outcome and cost of childbirth. Obstetr Gynecol 2001;97:567–76.
Garrido T, Barbeau R. The Northern California Perinatal Research Unit: a hybrid model bridging research, quality improvement and clinical practice. Permanente J 2010;14:51–6.
Garwick AW, Auger S. Participatory action research: the Indian Family Stories Project. Nurs Outlook 2003;51:261–6.
Gawlinski A, Miller PS. Advancing nursing research through a mentorship program for staff nurses. AACN Adv Crit Care 2011;22:190–200.
Gholami J, Majdzadeh R, Nedjat S, Nedjat S, Maleki K, Ashoorkhani M, et al. How should we assess knowledge translation in research organizations; designing a knowledge translation self-assessment tool for research institutes (SATORI). Health Res Policy Syst 2011;9:10.
Gibson BE, Darrah J, Cameron D, Hashemi G, Kingsnorth S, Lepage C, et al. Revisiting therapy assumptions in children's rehabilitation: clinical and research implications. Disabil Rehabil 2009;31:1446–53.
Gilbert GH, Richman JS, Qvist V, Pihlstrom DJ, Foy PJ, Gordan VV. Change in stated clinical practice associated with participation in the Dental Practice-Based Research Network. Gen Dent 2010;58:520–8.
Gilbert GH, Williams OD, Rindal DB, Pihlstrom DJ, Benjamin PL, Wallace MC, et al. The creation and development of the dental practice-based research network. J Am Dent Assoc 2008;139:74–81.
Gillum LA, Johnston SC. Characteristics of academic medical centers and ischemic stroke outcomes. Stroke 2001;32:2137–42.
Ginsburg LR, Lewis S, Zackheim L, Casebeer A. Revisiting interaction in knowledge translation. Implementation Sci 2007;2:34.
Glaser EM, Taylor SH. Factors influencing the success of applied research. Am Psychol 1973;28:140–6.
Glasson J, Chang E, Chenoweth L, Hancock K, Hall T, Hill-Murray F, et al. Evaluation of a model of nursing care for older patients using participatory action research in an acute medical ward. J Clin Nurs 2006;15:588–98.
Goderis G, Borgermans L, Mathieu C, Van Den Broeke C, Hannes K, Heyrman J, et al. Barriers and facilitators to evidence based care of type 2 diabetes patients: experiences of general practitioners participating to a quality improvement program. Implementation Sci 2009;4:41.
Goering P, Butterill D, Jacobson N, Sturtevant D. Linkage and exchange at the organizational level: a model of collaboration between research and policy. J Health Serv Res Policy 2003;8(Suppl. 2):14–19.
Goetz MB, Bowman C, Hoang T, Anaya H, Osborn T, Gifford AL, et al. Implementing and evaluating a regional strategy to improve testing rates in VA patients at risk for HIV, utilizing the QUERI process as a guiding framework: QUERI Series. Implementation Sci 2008;3:16.
Gold M, Taylor EF. Moving research into practice: lessons from the US Agency for Healthcare Research and Quality's IDSRN program. Implementation Sci 2007;2:9.
Graham ID, Tetroe J. Learning from the U.S. Department of Veterans Affairs Quality Enhancement Research Initiative: QUERI Series. Implementation Sci 2009;4:13.
Granger BB, Prvu-Bettger J, Aucoin J, Fuchs MA, Mitchell PH, Holditch-Davis D, et al. An academic-health service partnership in nursing: lessons from the field. J Nurs Scholarsh 2012;44:71–9.
Gravel K, Légaré F, Graham ID. Barriers and facilitators to implementing shared decision-making in clinical practice: a systematic review of health professionals’ perceptions. Implementation Sci 2006;1:16.
Green LA, Wyszewianski L, Lowery JC, Kowalski CP, Krein SL. An observational study of the effectiveness of practice guideline implementation strategies examined according to physicians’ cognitive styles. Implementation Sci 2007;2:41.
Griffey RT, Lo HG, Burdick E, Keohane C, Bates DW. Guided medication dosing for elderly emergency patients using real-time, computerized decision support. J Am Med Inform Assoc 2012,19:86–93.
Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implementation Sci 2012;7:50.
Gross CP, Krumholz HM, Van Wye G, Emanuel EJ, Wendler D. Does random treatment assignment cause harm to research participants? PLOS Med 2006;3:e188.
Hagedorn HJ, Heideman PW. The relationship between baseline organizational readiness to change assessment subscale scores and implementation of hepatitis prevention services in substance use disorders treatment clinics: a case study. Implementation Sci 2012;5:46.
Hakkennes S, Green S. Measures for assessing practice change in medical practitioners. Implementation Sci 2006;1:29.
Halbreich U, Smail N, Tu X, Halbreich J. Participation in clinical trials may improve care of acute schizophrenia inpatients in a general hospital. CNS Spectr 2008;13:757–61.
Hanney S, Packwood T, Buxton M. Evaluating the benefits from health research and development centres: a categorization, a model and examples of application. Eval 2000;6:137–60.
Hanney SR, Gonzalez-Block MA, Buxton MJ, Kogan M. The utilisation of health research in policy-making: concepts, examples and methods of assessment. Health Res Policy Syst 2003;1:2.
Harries U, Elliott H, Higgins A. Evidence-based policy-making in the NHS: exploring the interface between research and the commissioning process. J Pub Health Med 1999;21:29–36.
Harris R, Beardmore C. The research agenda and the role of the therapeutic radiographer: the College of Radiographers perspective. J Radiother Prac 2009;8:99–104.
Hart MK, Millard MW. Approaches to chronic disease management for asthma and chronic obstructive pulmonary disease: strategies through the continuum of care. Proc Baylor Univ Med Center 2010;23:223–9.
Hart P, Eaton L, Buckner M, Morrow BN, Barrett DT, Fraser DD, et al. Effectiveness of a computer-based educational program on nurses’ knowledge, attitude, and skill level related to evidence-based practice. Worldviews Evid Based Nurs 2008;5:75–84.
Harvey G, Fitzgerald L, Fielden S, McBride A, Waterman H, Bamford D, et al. The NIHR Collaborations for Leadership in Applied Health Research and Care (CLAHRC) for Greater Manchester: combining empirical, theoretical and experiential evidence to design and evaluate a large-scale implementation strategy. Implementation Sci 2011;6:96.
Haynes AB, Weiser TG, Berry WR, Lipsitz SR, Breizat A-HS, Dellinger EP, et al. Changes in safety attitude and relationship to decreased postoperative morbidity and mortality following implementation of a checklist-based surgical safety intervention. BMJ Qual Safety 2011;20:102–7.
Haynes AB, Weiser TG, Berry WR, Lipsitz SR, Breizat AH, Dellinger EP, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med 2009;360:491–9.
Heinrichs M, Beekman R, Limburg M. Simulation to estimate the capacity of a stroke unit. Stud Health Technol Inform 2000;77:47–50.
Higgins LP, Hawkins JW. Screening for abuse during pregnancy: implementing a multisite program. Am J Mater Child Nurs 2005;30:109–14.
Hills M, Mullett J. Women-centred care: working collaboratively to develop gender inclusive health policy. Health Care Women Int 2002;23:84–97.
Hockey PM, Bates DW. Physicians’ identification of factors associated with quality in high- and low-performing hospitals. Jt Comm J Qual Patient Saf 2010;36:217–23.
Holmboe ES, Prince L, Green M. Teaching and improving quality of care in a primary care internal medicine residency clinic. Acad Med 2005;80:571–7.
Holmes B, Scarrow G, Schellenberg M. Translating evidence into practice: the role of health research funders. Implementation Sci 2012;7:39.
Holberg HA. Building excellence in an academic medical center. N J Med 1996;93:59–61.
Horvath MM, Cozart H, Ahmad A, Langman MK, Ferranti J. Sharing adverse drug event data using business intelligence technology. J Patient Saf 2009;5:35–41.
Hoskins R, Gow A, McDowell J. Corporate solutions to caseload management – an evaluation. Community Pract 2007;80:20–4.
Hutchinson E, Parkhurst J, Phiri S, Gibb DM, Chishinga N, Droti B, et al. National policy development for cotrimoxazole prophylaxis in Malawi, Uganda and Zambia: the relationship between context, evidence and links. Health Res Policy Syst 2011;9:S6.
Innvær S, Vist G, Trommald M, Oxman A. Health policy-makers’ perceptions of their use of evidence: a systematic review. J Health Serv Res Policy 2002;7:239–44.
Jackson GL, Powell AA, Ordin DL, Schlosser JE, Murawsky J, Hers J, et al. Developing and sustaining quality improvement partnerships in the VA: the Colorectal Cancer Care Collaborative. J Gen Intern Med 2010;25(Suppl. 1):38–43.
Jackson GL, Yano EM, Edelman D, Krein SL, Ibrahim MA, Carey TS, et al. Veterans Affairs primary care organizational characteristics associated with better diabetes control. Am J Manag Care 2005;11:225–37.
Jacob R, Battista RN. Assessing technology assessment. Early results of the Quebec experience. Int J Technol Assess Health Care 1993;9:564–72.
Jacob R, McGregor M. Assessing the impact of health technology assessment. Int J Technol Assess Health Care 1997;13:68–80.
Jagosh J, Macaulay A, Pluye P, Salsberg J, Bush P, Henderson J, et al. Uncovering the benefits of participatory research: implications of a realist review for health research and practice. Milbank Q 2012;90:311–46.
Janni W, Kiechle M, Sommer H, Rack B, Gauger K, Heinrigs M, et al. Participation in study improves treatment strategies and individual patient care in the participating centres. Geburtshilfe und Frauenheilkunde 2005;65:966–73.
Jansen MW, Van Oers HA, Kok G, De Vries NK. Public health: disconnections between policy, practice and research. Health Res Pol Syst 2010;8:37.
Johnson PA, Bookman A, Bailyn L, Harrington M, Orton P. Innovation in ambulatory care: a collaborative approach to redesigning the health care workplace. Acad Med 2011;86:211–16.
Lomas J, Brown AD. Research and advice giving: a functional view of evidence-informed policy advice in a Canadian Ministry of Health. Milbank Q 2009;87:903–26.
Jones S, James E, Prasad S. Disease registries and outcomes research in children: focus on lysosomal storage disorders. Paediatr Drugs 2011;13:33–47.
Joshi M, Kazandjian V, Martin P, Minogue W, Christian D, Silverman N, et al. A comprehensive grassroots model for statewide safety improvement. Jt Comm J Qual Patient Saf 2005;31:671–7.
Joubert L. Academic–practice partnerships in practice research: a cultural shift for health social workers. Soc Work Health Care, 2006;43:151–61.
Juillard C, Lashoher A, Sewell CA, Uddin S, Griffith JG, Chang DC. A national analysis of the relationship between hospital volume, academic center status, and surgical outcomes for abdominal hysterectomy done for leiomyoma. J Am Coll Surg 2009;208:599–606.
Kahan NR, Kahan E, Waitman D-A, Kitai E, Chintz DP. The tools of an evidence-based culture: implementing clinical-practice guidelines in an Israeli HMO. Acad Med 2009;84:1217–25.
Kahn K, Ryan G, Beckett M, Taylor S, Berrebi C, Cho M, et al. Bridging the gap between basic science and clinical practice: a role for community clinicians. Implementation Sci 2011;6:34.
Kajermo KN, Boström A-M, Thompson DS, Hutchinson AM, Estabrooks CA, Wallin L. The BARRIERS scale – the barriers to research utilization scale: a systematic review. Implementation Sci 2010;5:32.
Kanavos P, Sullivan R, Lewison G, Schurer W, Eckhouse S, Vlachopioti Z. The role of funding and policies on innovation in cancer drug development. Ecancermedicalscience 2010;4:164.
Karlson EW, Liang MH, Eaton H, Huang J, Fitzgerald L, Rogers MP, et al. A randomized clinical trial of a psychoeducational intervention to improve outcomes in systemic lupus erythematosus. Arthritis Rheum 2004;50:1832–41.
Kay EJ, Ward N, Locker D. A general dental practice research network: impact of oral health in general dental practice patients. Br Dent J 2003;194:611–21.
Kazandjian VA, Lawthers J, Cernak CM, Pipesh FC. Relating outcomes to processes of care: the Maryland Hospital Association's Quality Indicator Project (QI Project). Jt Comm J Qual Improv 1993;19:530–8.
Kazandjian VA, Lied TR. Cesarean section rates: effects of participation in a performance measurement project. Jt Comm J Qual Improv 1998;24:187–96.
Keroack MA, Youngberg BJ, Cerese JL, Krsek C, Prellwitz LW, Trevelyan EW. Organizational factors associated with high performance in quality and safety in academic medical centers. Acad Med 2007;82:1178–86.
Khresheh R, Barclay L. Implementation of a new birth record in three hospitals in Jordan: a study of health system improvement. Health Policy Plan 2008;23:76–82.
Khresheh R, Homer C, Barclay L. A comparison of labour and birth outcomes in Jordan with WHO guidelines: a descriptive study using a new birth record. Midwifery Res 2009;25:e11–8.
Khungern J, Krairiksh M, Tassaniyom N, Sritanyarat W. Hospital quality improvement: a case study of a general hospital under the Ministry of Public Health. Thai J Nurs Res 2006;10:191–200.
Khuri SF, Daley J, Henderson W, Hur K, Demakis J, Aust JB, et al. The Department of Veterans Affairs’ NSQIP: the first national, validated, outcome-based, risk-adjusted, and peer-controlled program for the measurement and enhancement of the quality of surgical care. National VA Surgical Quality Improvement Program. Ann Surg 1998;228:491–507.
Khuri SF, Najjar SF, Daley J, Krasnicka B, Hossain M, Henderson WG, et al. Comparison of surgical outcomes between teaching and nonteaching hospitals in the Department of Veterans Affairs. Ann Surg 2001;234:370–3.
King G, Servais M, Kertoy M, Specht J, Currie M, Rosenbaum P, et al. A measure of community members’ perceptions of the impacts of research partnerships in health and social services. Eval Program Plann 2009;32:289–99.
Kirkevold M. The Norwegian teaching home program: developing a model for systematic practice development in the nursing home sector. Int J Older People Nurs 2008;3:282–6.
Kislov R, Harvey G, Walsh K. Collaborations for Leadership in Applied Health Research and Care: lessons from the theory of communities of practice. Implementation Sci 2011;6:64.
Kitson AL, Rycroft-Malone J, Harvey G, McCormack B, Seers K, Titchen A. Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges. Implementation Sci 2008;3:1.
Kiwanuka-Tondo J, Snyder LB. The influence of organizational characteristics and campaign design elements on communication campaign quality: evidence from 91 Ugandan AIDS campaigns. J Health Commun 2002;7:59–77.
Kizer KW. The “New VA”: a national laboratory for health care quality management. Am J Med Qual 1999;14:3–20.
Kizer KW, Demakis JG, Feussner JR. Reinventing VA health care: systematizing quality improvement and quality innovation. Med Care 2000;38:7–16.
Klinger CJ, Marrone CM, Bass JL. Medical liaison survey 4: assessing tools used by medical liaisons, clinical trial involvement, and career strategies. Drug Inform J 2010;44:551–67.
Knoer SJ, Pastor JD 3rd, Phelps PK. Lessons learned from a pharmacy practice model change at an academic medical center. Am J Health Syst Pharm 2010;67:1862–9.
Knudsen HK, Roman PM. Modeling the use of innovations in private treatment organizations: the role of absorptive capacity. J Subst Abuse Treat 2004;26:51–9.
Kocaman G, Seren S, Lash AA, Kurt S, Bengu N, Yurumezoglu HA. Barriers to research utilisation by staff nurses in a university hospital. J Clin Nurs 2010;19:1908–18.
Kooby DA, Hawkins WG, Schmidt CM, Weber SM, Bentrem DJ, Gillespie TW, et al. A multicenter analysis of distal pancreatectomy for adenocarcinoma: is laparoscopic resection appropriate? J Am Coll Surg 2010;210:777–9.
Kothari A, Birch S, Charles C. “Interaction” and research utilisation in health policies and programs: does it work? Health Policy 2005;71:117–25.
Kothari A, Edwards N, Hamel N, Judd M. Is research working for you? Validating a tool to examine the capacity of health organizations to use research. Implementation Sci 2009;4:46.
Kothari A, Rudman D, Dobbins M, Rouse M, Sibbald S, Edwards N. The use of tacit and explicit knowledge in public health: a qualitative study. Implementation Sci 2012;7:20.
Kravitz RL, Duan N, Niedzinski EJ, Hay MC, Saskia K, Weisner TS, et al. What ever happened to N-of-1 trials? Insiders’ perspectives and a look to the future. Milbank Q 2008;86:533–55.
Krein SL, Bernstein SJ, Fletcher CE, Makki F, Goldzweig CL, Watts B, et al. Improving eye care for veterans with diabetes: an example of using the QUERI steps to move from evidence to implementation: QUERI Series. Implementation Sci 2008;3:18.
Kristensen HK, Borg T, Hounsgaard L. Facilitation of research-based evidence within occupational therapy in stroke rehabilitation. Br J Occup Ther 2011;74:473–83.
Krzyzanowska MK, Kaplan R, Sullivan R. How may clinical research improve healthcare outcomes? Ann Oncol 2011;22:vii10–vii15.
Kuehlein T, Goetz K, Laux G, Gutscher A, Szecsenyi J, Joo S. Antibiotics in urinary-tract infections. Sustained change in prescribing habits by practice test and self-reflection: a mixed methods before-after study. BMJ Qual Saf 2011;20:522–6.
Kuruvilla S, Mays N, Walt G. Describing the impact of health services and policy research. J Health Serv Res Policy 2007;12(Suppl. 1):23–31.
Lammers J, Veninga D, Speelman P, Hoekstra J, Lombarts K. Performance of Dutch hospitals in the management of splenectomized patients. J Hosp Med Online 2010;5:466–70.
Lancellot M. CNS combats pressure ulcers with skin and wound assessment team (SWAT). Clin Nurse Spec 1996;10:154–60.
Lane JP, Flagg JL. Translating three states of knowledge – discovery, invention, and innovation. Implementation Sci 2010;5:9.
Lane JP, Roger JD. Engaging national organizations for knowledge translation: comparative case studies in knowledge value mapping. Implementation Sci 2011;6:106.
Lane PJ, Koka BR, Pathak S. The reification of absorptive capacity: a critical review and rejuvenation of the construct. Acad Manag Rev 2006;31:833–63.
Lang ES, Wyer PC, Haynes RB. Knowledge translation: closing the evidence-to-practice gap. Ann Emerg Med 2007;49:355–63.
Langguth B, Goodey R, Azevedo A, Bjorne A, Cacace A, Crocetti A, et al. Consensus for tinnitus patient assessment and treatment outcome measurement: Tinnitus Research Initiative meeting, Regensburg, July 2006. Prog Brain Res 2007;166:525–36.
Lapcevic WA, French DD, Campbell RR. All-cause mortality rates of hip fractures treated in the VHA: do they differ from Medicare facilities? J Am Med Dir Assoc 2010;11:116–19.
Laprise R, Thivierge R, Gosselin G, Bujas-Bobanovic M, Vandal S, Paquette D, et al. Improved cardiovascular prevention using best CME practices: a randomized trial. J Contin Educ Health Prof 2009;29:16–31.
Larrabee JH, Sions J, Fanning M, Withrow ML, Ferretti A. Evaluation of a program to increase evidence-based practice change. J Nurs Adm 2007;37:302–10.
Laskowitz DT, Drucker RP, Parsonnet J, Cross PC, Gesundheit N. Engaging students in dedicated research and scholarship during medical school: the long-term experiences at Duke and Stanford. Acad Med 2010;85:419–28.
Lavis J, Davies H, Oxman A, Denis J-L, Golden-Biddle K, Ferlie E. Towards systematic reviews that inform health care management and policy-making. J Health Serv Res Policy 2005;10:35–48.
Lavis JN, Oxman AD, Moynihan R, Paulsen EJ. Evidence-informed health policy 1 – synthesis of findings from a multi-method study of organizations that support the use of research evidence. Implementation Sci 2008;3:53.
Lavis JN, Robertson D, Woodside JM, McLeod CB. How can research organizations more effectively transfer research knowledge to decision makers? Milbank Q 2003;81:171–2; 221–48.
Lavis JN, Ross SE, Hurley JE, Hohenadel JM, Stoddart GL, Woodward CA, et al. Examining the role of health services research in public policymaking. Milbank Q 2002;80:125–54.
Lawhorne LW, Ouslander JG, Parmelee PA, and the Urinary Incontinence Work Group of the AMDA-F LTC Research Network. Clinical practice guidelines, process improvement teams, and performance on a quality indicator for urinary incontinence: a pilot study. J Am Med Dir Assoc 2008;9:504–8.
Lawrence DM, Mattingly PH, Ludden JM. Trusting in the future: the distinct advantage of nonprofit HMOs. Milbank Q 1997;75:5–10.
Lehnert BE, Bree RL. Analysis of appropriateness of outpatient CT and MRI referred from primary care clinics at an academic medical center: how critical is the need for improved decision support? JACR 2010;7:192–7.
Lehoux P, Hivon M, Denis J-L, Tailliez S. What medical specialists like and dislike about health technology assessment reports. J Health Serv Res Policy 2009;14:197–203.
Lekkou S, Strauss HI. Social work students’ fears of their professional competence – Narratives and supervision reflections: a multiple cultural study. Rev Clin Pharmacol Pharmacokinet Int Ed 2011;25:149–58.
Levine SR, Adamowicz D, Johnston KC. “How to” guide for clinicians interested in becoming involved in clinical stroke research. CONTINUUM Lifelong Learn Neurol 2008;14:117–36.
Leyden JE, Doherty GA, Hanley A, McNamara DA, Shields C, Leader M, et al. Quality of colonoscopy performance among gastroenterology and surgical trainees: a need for common training standards for all trainees? Endoscopy 2011;43:935–40.
Li LC, Grimshaw JM, Nielsen C, Judd M, Coyte PC, Graham ID. Use of communities of practice in business and health care sectors: a systematic review. Implementation Sci 2009;4:27.
Lindamer LA, Lebowitz B, Hough RL, Garcia P, Aguirre A, Halpain MC, et al. Establishing an implementation network: lessons learned from community-based participatory research. Implementation Sci 2009;4:17.
Lindbloom EJ, Ewigman BG, Hickner JM. Practice-based research networks. Med Care 2004;42:III-45–III-49.
Lindeman M, Smith R, Vrantsidis F, Gough J. Action research in aged care: a model for practice change and development. Geriaction 2002;20:10–14.
Lindeman MA, Black K, Smith R, Gough J, Bryce A, Gilsenan B, et al. Changing practice in residential aged care using participatory methods. Educ Health 2003;16:22–31.
Lodge A, Bamford D. Health service improvement through diagnostic waiting list management. Leadership Health Serv 2007;20:254–65.
Lofman P, Pietila A-M, Haggman-Laitila A. Self-evaluation and peer review – an example of action research in promoting self-determination of patients with rheumatoid arthritis. J Clin Nurs 2007;16:84–94.
Logan CA, Cressey BD, Wu RY, Janicki AJ, Chen CX, Bolourchi ML, et al. Monitoring universal protocol compliance through real-time clandestine observation by medical students results in performance improvement. J Surg Educ 2012;69:41–6.
Lomas J, Fulop N, Gagnon D, Allen P. On being a good listener: setting priorities for applied health services research. Milbank Q 2003;81:363–88.
Longe RL, Taylor AT, Wade WE, Spruill WJ. Enhancing outcomes research: the experience of a community pharmacist research network. Dis Manag Health Outcomes 1999;6:261–8.
Longo WE, Cheadle W, Fink A, Kozol R, Depalma R, Rege R, et al. The role of the Veterans Affairs Medical Centers in patient care, surgical education, research and faculty development. Am J Surg 2005;190:662–75.
Lopez V. Critical care nursing research priorities in Hong Kong. J Adv Nurs 2003;43:578–87.
Loth C, Schippers GM, Hart HT, Van De Wijngaart G. Enhancing the quality of nursing care in methadone substitute clinics using action research: a process evaluation. J Adv Nurs 2007;57:422–31.
Lugo NR. Adolescent health: immunizations are key to prevention: a White Paper by the Nurse Practitioner Healthcare Foundation. Am J Nurse Pract 2009;13:44–50.
Lukas CV, Holmes SK, Cohen AB, Restuccia J, Cramer IE, Shwartz M, et al. Transformational change in health care systems: an organizational model. Health Care Manage Rev 2007;32:309–20.
Lurie N, Rich EC, Simpson DE, Meyer J, Schiedermayer DL, Goodman JL, et al. Pharmaceutical representatives in academic medical centers: interaction with faculty and housestaff. J Gen Intern Med 1990;5:240–3.
Luther SL, Nelson A, Powell-Cope G. Provider attitudes and beliefs about clinical practice guidelines. SCI Nurs 2004;21:206–12.
Lutz W, Saunders SM, Leon SC, Martinovich Z, Kosfelder J, Schulte D, et al. Empirically and clinically useful decision making in psychotherapy: differential predictions with treatment response models. Psychol Assess 2006;18:133–41.
Maar MA, Erskine B, McGregor L, Larose TL, Sutherland ME, Graham D, et al. Innovations on a shoestring: a study of a collaborative community-based Aboriginal mental health service model in rural Canada. Int J Mental Health Syst 2009;3:27.
Maar MA, Seymour A, Sanderson B, Boesch L. Reaching agreement for an Aboriginal e-health research agenda: the Aboriginal Telehealth Knowledge Circle consensus method. Rural Remote Health 2010;10:1299.
Machin D, Stenning SP, Parmar MK, Fayers PM, Girling DJ, Stephens RJ, et al. Thirty years of Medical Research Council randomized trials in solid tumours. Clin Oncol 1997;9:100–14.
Maconochie H, McNeill F. User involvement: children's participation in a parent-baby group. Community Pract 2010;83:17–20.
Magill MK, Day J, Mervis A, Donnelly SM, Parsons M, Baker AN, et al. Improving colonoscopy referral rates through computer-supported, primary care practice redesign. J Healthcare Qual 2009;31:43.
Magnusson L, Hanson E. Supporting frail older people and their family carers at home using information and communication technology: cost analysis. J Adv Nurs 2005;51:645–57.
Magrabi F, Coiera EW, Westbrook JI, Gosling AS, Vickland V. General practitioners’ use of online evidence during consultations. Int J Med Informat 2005;74:1–12.
Maher D, Harries AD, Nachega JB, Jaffar S. Methodology matters: What type of research is suitable for evaluating community treatment supporters for HIV and tuberculosis treatment? Trop Med Int Health 2012;17:264–71.
Maher EJ, Timothy A, Squire CJ, Goodman A, Karp SJ, Paine CH, et al. The Royal College of Radiologists’ report. Audit: the use of radiotherapy for NSCLC in the UK. Clin Oncol 1993;5:72–9.
Mahmoud AO, Ayanniyi AA, Lawal A, Omolase CO, Ologunsua Y, Samaila E. Ophthalmic research priorities and practices in Nigeria: an assessment of the views of Nigerian ophthalmologists. Middle East Afr J Ophthalmol 2011;18:164–9.
Makhija SK, Gilbert GH, Rindal DB, Benjamin PL, Richman JS, Pihlstrom DJ, et al. Dentists in practice-based research networks have much in common with dentists at large: evidence from the Dental Practice-Based Research Network. Gen Dent 2009;57:270–5.
Mandl LA, Solomon DH, Smith EL, Lew RA, Katz JN, Shmerling RH. Using antineutrophil cytoplasmic antibody testing to diagnose vasculitis: can test-ordering guidelines improve diagnostic accuracy? Arch Intern Med 2002;162:1509–14.
Mannion R, Goddard M. Impact of published clinical outcomes data: case study in NHS hospital trusts. BMJ (Clin Res Ed.) 2001;323:260–3.
Manns BJ, Mortis GP, Taub KJ, Mclaughlin K, Donaldson C, Ghali WA. The Southern Alberta Renal Program database: a prototype for patient management and research initiatives. Clin Invest Med 2001;24:164–70.
Marich MJ. Toward a high performance architecture for time-critical information services: sequential case studies and action research of regional EMS systems. Dissertation Abstr Int Section A Humanities Soc Sci 2009;69:3369.
Martin BD. Clinical quality measures and the incidence rate of invasive methicillin-resistant Staphylococcus aureus (IMRSA) in Tidewater Virginia. Dissertation Abstr Int Section B Sci Engine 2011;72:2677.
Matamala MI. Gender-related indicators for the evaluation of quality of care in reproductive health services. Reprod Health Matters 1998;6:10–21.
Mattice M. Parse's theory of nursing in practice: a manager's perspective. Can J Nurs Admin 1991;4:11–13.
McCloskey DJ. The relationship between organizational factors and nurse factors affecting the conduct and utilization of nursing research. PhD dissertation. George Mason University; 2005.
McCourt B, Harrington RA, Fox K, Hamilton CD, Booher K, Hammond WE, et al. Data standards: at the intersection of sites, clinical research networks, and standards development initiatives. Drug Inform J 2007;41:393–404.
McCoy JM, Byers JF. Using evidence-based performance improvement in the community hospital setting. J Healthcare Qual 2006;28:13–17.
McEwan AB, Tsey K, McCalman J, Travers HJ. Empowerment and change management in Aboriginal organisations: a case study. Aus Health Rev 2010;34:360–7.
McKellar L, Pincombe JI, Henderson AN. Action research: a process to facilitate collaboration and change in clinical midwifery practice. Evid Based Midwifery 2010;8:85–90.
McQueen L, Mittman BS, Demakis JG. Overview of the Veterans Health Administration (VHA) Quality Enhancement Research Initiative (QUERI). J Am Med Informat Asso 2004;11:339–43.
McWilliam CL. Using a participatory research process to make a difference in policy on aging. Can J Aging 1997;16:70–89.
Melnyk BM, Western Institute of Nursing (US). The “so what” factor in a time of healthcare reform: conducting research & EBP projects that impact healthcare quality, cost and patient outcomes. Communicating Nursing Research Conference and WIN Assembly, Rio All-Suites Hotel, Las Vegas, NV, 2011. Portland, OR: Western Institute of Nursing; pp. 59–69.
Menke TJ, Ashton CM, Krumhaus S. Research organizations in the Department of Veterans Affairs that focus on database analysis. Med Care 1996;34:MS103–10.
Messmer PR, Jones SG, Rosillo C. Using nursing research projects to meet magnet recognition program standards. J Nurs Admin 2002;32:538–43.
Minasian LM, Carpenter WR, Weiner BJ, Anderson DE, McCaskill-Stevens W, Nelson S, et al. Translating research into evidence-based practice: the National Cancer Institute Community Clinical Oncology Program. Cancer 2010;116:4440–9.
Mitchell EA, Conlon A-M, Armstrong M, Ryan AA. Towards rehabilitative handling in caring for patients following stroke: a participatory action research project. J Clin Nurs 2005;14(Suppl. 1):3–12.
Mitton C, Adair CE, McKenzie E, Patten SB, Waye B, Adair E. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q 2007;85:729–68.
Mold JW, Cacy DS, Dalbir DK. Management of laboratory test results in family practice: an OKPRN study. J Family Pract 2000;49:709–15.
Mold JW, Peterson KA. Primary care practice-based research networks: working at the interface between research and quality improvement. Ann Family Med 2005;3:12–20.
Mostert SP, Ellenbroek SP, Meijer I, Van Ark G, Klasen EC. Societal output and use of research performed by health research groups. Health Res Policy Syst 2010;8:30.
Moxham L, Dwyer T, Happell B, Reid-Searl K, Kahl J, Morris J, et al. Recognising our role: improved confidence of general nurses providing care to young people with a mental illness in a rural paediatric unit. J Clin Nurs 2010;19:1434–42.
Munn-Giddings C, McVicar A, Smith L. Systematic review of the uptake and design of action research in published nursing research, 2000–2005. J Res Nurs 2008;13:465–77.
Nagykaldi Z, Mold JW, Aspy CB. Practice facilitators: a review of the literature. Family Med 2005;37:581–8.
Nash L, Gorrell J, Cornish A, Rosen A, Miller V, Tennant C. Clinical outcome of an early psychosis intervention program: evaluation in a real-world context. Aus N Z J Psychiatr 2004;38:694–701.
Neely O. Advancing research. Pract Nurs 2009;20:104.
Nelson EC, Splaine ME, Godfrey MM, Kahn V, Hess A, Batalden P, et al. Using data to improve medical practice by measuring processes and outcomes of care. Jt Comm J Qual Improv 2000;26:667–85.
Niederhauser VP, Kohr L. Research endeavors among pediatric nurse practitioners (REAP) study. J Pediatr Health Care 2005;19:80–9.
Norman CD, Huerta T. Knowledge transfer & exchange through social networks: building foundations for a community of practice within tobacco control. Implementation Sci 2006;1:20.
Oh CH. Explaining the impact of policy information on policy-making. Knowledge Policy 1997;10:25–55.
Ohldin A, Taylor R, Stein A, Garthwaite T. Enhancing VHA's mission to improve veteran health: synopsis of VHA's Malcolm Baldrige award application. Qual Manag Health Care 2002;10:29–37.
Ohrn K, Olsson C, Wallin L. Research utilization among dental hygienists in Sweden – a national survey. Int J Dent Hyg 2005;3:104–11.
Oliver A. The Veterans Health Administration: an American success story? Milbank Q 2007;85:5–35.
Orton L, Lloyd-Williams F, Taylor-Robinson D, O’Flaherty M, Capewell S. The use of research evidence in public health decision making processes: systematic review. PLOS ONE 2011;6:e21704.
Otto F, Arnhold-Kerri S. [Quality management and practice-oriented research in a clinic-network of mother-/father-child rehabilitation centres.] Rehabilitation 2010;49:105–13.
Ouimet M, Landry R, Ziam S, Bédard P-O. The absorption of research knowledge by public civil servants. Evid Policy J Res Debate Prac 2009;5:331–50.
Palmer HR, Weston J, Gentry L, Salazar M, Putney K, Frost C, et al. Improving patient care through implementation of an antimicrobial stewardship program. Am J Health Syst Pharm 2011;68:2170–4.
Panik A, Bokovoy J, Karoly E, Badillo K, Beckenmyer C, Vose C, et al. Research on the frontlines of healthcare: a cooperative learning approach. Nurs Res 2006;55:S3–9.
Papanikolaou PN, Christidi GD, Ioannidis JPA. Patient outcomes with teaching versus nonteaching healthcare: a systematic review. PLOS Med 2006;3:e341.
Parente ST, Phelps CE, O’Connor PJ. Economic analysis of medical practice variation between 1991 and 2000: the impact of patient outcomes research teams (PORTs). Int J Technol Assess Health Care 2008;24:282–93.
Patel V. Health systems research: a pragmatic model for meeting mental health needs in low-income countries. In Andrews G, Henderson S, editors. Unmet need in psychiatry. New York, NY: Cambridge University Press US; 2000.
Pater J, Rochon J, Parmar M, Selby P. Future research and methodological approaches. Ann Oncol 2011;22:vii57–vii61.
Patriarca F, Fanin R, Silvestri F, Russo D, Baccarani M. Multiple myeloma: presenting features and survival according to hospital referral. Leukemia Lymphoma 1998;30:551–62.
Pearson M, Zwi AB, Buckley NA. Prospective policy analysis: how an epistemic community informed policymaking on intentional self poisoning in Sri Lanka. Health Res Policy Syst 2010;8:19.
Pelletier KR. A review and analysis of the clinical- and cost-effectiveness studies of comprehensive health promotion and disease management programs at the worksite: 1998–2000 update. Am J Health Promot 2001;16:107–16.
Perli S, Suriani C, Pari-Etld G. [The participation in a clinical trial on pressure ulcer as an opportunity for education.] Assistenza infermieristica e ricerca 2004;23:209–11.
Perren S. Being involved in a research study – a counsellor's view. Couns Psychother Res 2003;3:246–8.
Perrier L, Mrklas K, Lavis JN, Straus SE. Interventions encouraging the use of systematic reviews by health policymakers and managers: a systematic review. Implementation Sci 2011;6:43.
Peterson E, Shatin D, McCarthy D. Health services research at United Healthcare Corporation: the role of the center for health care policy and evaluation. Med Care 1996;34:MS65–76.
Petersson P, Springett J, Blomqvist K. Telling stories from everyday practice, an opportunity to see a bigger picture: a participatory action research project about developing discharge planning. Health Soc Care Community 2009;17:548–56.
Pillar B, Jarjoura D. Assessing the impact of reengineering on nursing. J Nurs Adm 1999;29:57–64.
Pollitt A, Wooding S, Hanney S, Buxton M, Grant J. Project retrosight – understanding the returns from cardiovascular and stroke research – Methodology Report. Santa Monica, CA: RAND Corporation; 2011.
Porter J, Parsons S, Robertson C. Time for review: supporting the work of an advisory group. J Res Spec Educ Needs 2006;6:11–16.
Powell AE, Davies HTO, Bannister J, Macrae WA. Understanding the challenges of service change – learning from acute pain services in the UK. J R Soc Med 2009;102:62–8.
Pratt RJ, Pellowe CM, Juvekar SK, Potdar NS, Weston AJ, Joykutty A, et al. Kaleidoscope: a 5-year action research project to develop nursing confidence in caring for patients with HIV disease in west India. Int Nurs Rev 2001;48:164–73.
Purushotham AD, Pain SJ, Miles D, Harnett A. Variations in treatment and survival in breast cancer. Lancet Oncol 2001;2:719.
Render ML, Freyberg RW, Hasselbeck R, Hofer TP, Sales AE, Deddens J, et al. Infrastructure for quality transformation: measurement and reporting in veterans administration intensive care units. BMJ Qual Saf 2011;20:498–507.
Resnick B. The difference nurses are making to improve quality of care to older adults through the interdisciplinary Nursing Quality Research Initiative. Geriatr Nurs 2010;31:157–64.
Reutter L, Stewart MJ, Raine K, Williamson DL, Letourneau N, McFall S. Partnerships and participation in conducting poverty-related health research. Prim Health Care Res Develop 2005;6:356–66.
Robinson A, Street A. Improving networks between acute care nurses and an aged care assessment team. J Clin Nurs 2004;13:486–96.
Rosenfeld P, Duthie E, Bier J, Bowar-Ferres S, Fulmer T, Iervolino L, et al. Engaging staff nurses in evidence-based research to identify nursing practice problems and solutions. Appl Nurs Res 2000;13:197–203.
Ross S, Lavis J, Rodriguez C, Woodside J, Denis J-L. Partnership experiences: involving decision-makers in the research process. J Health Serv Res Policy 2003;8(Suppl. 2):26–34.
Rowley E, Morriss R, Currie G, Schneider J. Research into practice: Collaboration for Leadership in Applied Health Research and Care (CLAHRC) for Nottinghamshire, Derbyshire, Lincolnshire (NDL). Implementation Sci 2012;7:40.
Rubenstein LV, Chaney EF, Ober S, Felker B, Sherman SE, Lanto A, et al. Using evidence-based quality improvement methods for translating depression collaborative care research into practice. Families Syst Health 2010;28:91–113.
Ryan G, Berrebi C, Beckett M, Taylor S, Quiter E, Cho M, et al. Reengineering the clinical research enterprise to involve more community clinicians. Implementation Sci 2011;6:36.
Ryba NL, Clayton SR. Narrowing the gap: How a research intervention influenced clinical forensic practice. J Forensic Psychol Prac 2007;7:19–36.
Rycroft-Malone J, McCormack B, Hutchinson AM, Decorby K, Bucknall TK, Kent B, et al. Realist synthesis: illustrating the method for implementation research. Implementation Sci 2012;7:33.
Sachdeva R, Williams T, Quigley J. Mixing methodologies to enhance the implementation of healthcare operational research. J Operational Res Soc 2007;58:159–67.
Sankaranarayanan R, Sauvaget C, Ramadas K, Ngoma T, Teguete I, Muwonge R, et al. Clinical trials of cancer screening in the developing world and their impact on cancer healthcare. Ann Oncol 2011;22:vii20–vii28.
Sarri RC, Sarri CM. Organizational and community change through participatory action research. Admin Soc Work 1992;16:99–122.
Schreyogg J, Von Reitzenstein C. Strategic groups and performance differences among academic medical centers. Health Care Manage Rev 2008;33:225–33.
Scott SD, Plotnikoff RC, Karunamuni N, Bize R, Rodgers W. Factors influencing the adoption of an innovation: an examination of the uptake of the Canadian Heart Health Kit (HHK). Implementation Sci 2008;3:41.
Scott T, Mannion R, Marshall M, Davies H. Does organisational culture influence health care performance? A review of the evidence. J Health Serv Res Policy 2003;8:105–17.
Selby P, Autier P. The impact of the process of clinical research on health service outcomes. Ann Oncol 2011;22:vii5–vii9.
Sett A, Mistri AK. A dramatic impact of the CLOTS Studies on clinical practice in the UK. Int J Stroke 2011;6:15.
Slora EJ, Bocian AB, Finch SA, Wasserman RC. Pediatric research in office settings at 25: A quarter century of network research toward the betterment of children's health. Curr Probl Pediatr Adolesc Health Care 2011;41:286–92.
Soh KL, Davidson PM, Leslie G, Bin Abdul Rahman A. Action research studies in the intensive care setting: a systematic review. Int J Nurs Stud 2011;48:258–68.
Somkin CP, Altschuler A, Ackerson L, Geiger AM, Greene SM, Mouchawar J, et al. Organizational barriers to physician participation in cancer clinical trials. Am J Manag Care 2005;11:413–21.
Soumerai SB, Ross-Degnan D, Fortess EE, Walser BL. Determinants of change in Medicaid pharmaceutical cost sharing: does evidence affect policy? Milbank Q 1997;75:11–34.
Souza NM, Sebaldt RJ, Mackay JA, Prorok JC, Weise-Kelly L, Navarro T, et al. Computerized clinical decision support systems for primary preventive care: a decision–maker–researcher partnership systematic review of effects on process of care and patient outcomes. Implementation Sci 2011;6:87.
Squires JE, Estabrooks CA, Gustavsson P, Wallin L. Individual determinants of research utilization by nurses: a systematic review update. Implementation Sci 2011;6:1.
Squires JE, Moralejo D, Lefort SM. Exploring the role of organizational policies and procedures in promoting research utilization in registered nurses. Implementation Sci 2007;2:17.
Stanley R, Lillis KA, Zuspan SJ, Lichenstein R, Ruddy RM, Gerardi MJ, et al. Development and implementation of a performance measure tool in an academic pediatric research network. Contemp Clin Trials 2010;31:429–37.
Stetler CB, McQueen L, Demakis J, Mittman BS. An organizational framework and strategic implementation for system-level change to enhance research-based practice: QUERI Series. Implementation Sci 2008;3:30.
Stetler CB, Mittman BS, Francis J. Overview of the VA Quality Enhancement Research Initiative (QUERI) and QUERI theme articles: QUERI Series. Implementation Sci 2008;3:8.
Stetler CB, Ritchie JA, Rycroft-Malone J, Schultz AA, Charns MP. Institutionalizing evidence-based practice: an organizational case study using a model of strategic change. Implementation Sci 2009;4:78.
Stiller CA. Centralised treatment, entry to trials and survival. Br J Cancer 1994;70:352–62.
Straus SE, Brouwers M, Johnson D, Lavis JN, Légaré F, Majumdar SR, et al. Core competencies in the science and practice of knowledge translation: description of a Canadian strategic training initiative. Implementation Sci 2011;6:127.
Subramanian U, Sutherland J, McCoy KD, Welke KF, Vaughn TE, Doebbeling BN. Facility-level factors influencing chronic heart failure care process performance in a national integrated health delivery system. Med Care 2007;45:28–45.
Suderman EM, Deatrich JV, Johnson LS, Sawatzky-Dickson DM. Action research sets the stage to improve discharge preparation. Pediatr Nurs 2000;26:571–6.
Sullivan R. Has the US Cancer Centre model been ‘successful’? Lessons for the European cancer community. Mol Oncol 2009;3:192–203.
Tanabe P, Gisondi M, Barnard C, Lucenti MJ, Cameron K. Can education and staff-based participatory research change nursing practice in an era of ED overcrowding? A focus group study. J Emerg Nurs 2009;35:290–8.
Teal R, Bergmire DM, Johnston M, Weiner BJ. Implementing community-based provider participation in research: an empirical study. Implementation Sci 2012;7:41.
Tennille J, Solomon P, Blank M. Case managers discovering what recovery means through an HIV prevention intervention. Community Mental Health J 2010;46:486–93.
Tetroe JM, Graham ID, Foy R, Robinson N, Martin P, Wensing M, et al. Health research funding agencies' support and promotion of knowledge translation: an international study. Milbank Q 2008;86:125–55.
Theobald S, Taegtmeyer M, Squire SB, Crichton J, Simwaka BN, Thomson R, et al. Towards building equitable health systems in Sub-Saharan Africa: lessons from case studies on operational research. Health Res Policy Syst 2009;7:26.
Thomas P, Graffy J, Wallace P, Kirby M. How primary care networks can help integrate academic and service initiatives in primary care. Ann Family Med 2006;4:235–9.
Tranmer JE, Lochhaus-Gerlach J, Lam M. The effect of staff nurse participation in a clinical nursing research project on attitude towards, access to, support of and use of research in the acute care setting. Can J Nurs Leader 2002;15:18–26.
Trivedi AN, Matula S, Miake-Lye I, Glassman PA, Shekelle P, Asch S. Systematic review: comparison of the quality of medical care in Veterans Affairs and non-Veterans Affairs settings. Med Care 2011;49:76–88.
Trostle J, Bronfman M, Langer A. How do researchers influence decision-makers? Case studies of Mexican policies. Health Policy Plan 1999;14:103–14.
Tsai SL. Nurses’ participation and utilization of research in the Republic of China. Int J Nurs Stud 2000;37:435–44.
Ullrich S, McCutcheon H, Parker B. Reclaiming time for nursing practice in nutritional care: outcomes of implementing Protected Mealtimes in a residential aged care setting. J Clin Nurs 2011;20:1339–48.
Van De Vall M, Bolas C. Using social policy research for reducing social problems: an empirical analysis of structure and functions. J Appl Behav Sci 1982;18:49–66.
Van Gijn W, Krijnen P, Lemmens VEPP, Den Dulk M, Putter H, Van De Velde CJH. Quality assurance in rectal cancer treatment in the Netherlands: a catch up compared to colon cancer treatment. Eur J Surg Oncol 2010;36:340–4.
Van Weel C, De Grauw W. Family practices registration networks contributed to primary care research. J Clin Epidemiol 2006;59:779–83.
Vaughn TE, McCoy KD, Bootsmiller BJ, Woolson RF, Sorofman B, Tripp-Reimer T, et al. Organizational predictors of adherence to ambulatory care screening guidelines. Med Care 2002;40:1172–85.
Velasquez J, Knatterud-Hubinger N, Narr D, Mendenhall T, Solheim C. Mano a Mano: improving health in impoverished Bolivian communities through community-based participatory research. Families Syst Health 2011;29:303–13.
Vlasic W, Almond D. Research-based practice: reducing bedrest following cardiac catheterization. Can J Cardiovasc Nurs 1999;10:19–22.
Walker G, Poland F. Intermediate care for older people: using action research to develop reflective nursing practice in a rehabilitation setting. Qual Ageing 2000;1:31–43.
Wallis M, Tyson S. Improving the nursing management of patients in a hematology/oncology day unit: an action research project. Cancer Nurs 2003;26:75–83.
Walshe K, Rundall TG. Evidence-based management: from theory to practice in health care. Milbank Q 2001;79:iv–v, 429–57.
Ward MM, Vaughn TE, Uden-Holman T, Doebbeling BN, Clarke WR, Woolson RF. Physician knowledge, attitudes and practices regarding a widely implemented guideline. J Eval Clin Prac 2002;8:155–62.
Ward V, House A, Hamer S. Developing a framework for transferring knowledge into action: a thematic analysis of the literature. J Health Serv Res Policy 2009;14:156–64.
Waterman H, Harker R, MacDonald H, Mclaughlan R, Waterman C. Evaluation of an action research project in ophthalmic nursing practice. J Adv Nurs 2005;52:389.
Wathen CN, Sibbald SL, Jack SM, Macmillan HL. Talk, trust and time: a longitudinal study evaluating knowledge translation and exchange processes for research on violence against women. Implementation Sci 2011;6:102.
Webb B. Using focus groups as a research method: a personal experience. J Nurs Manag 2002;10:27–35.
Weiner BJ. A theory of organizational readiness for change. Implementation Sci 2009;4:67.
Wells WA, Brooks A. Adoption of new health products in low and middle income settings: how product development partnerships can support country decision making. Health Res Policy Syst 2011;9:15.
Wensing M, Van Lieshout J, Koetsenruiter J, Reeves D. Information exchange networks for chronic illness care in primary care practices: an observational study. Implementation Sci 2010;5:3.
Whicher DM, Chalkidou K, Dhalla IA, Levin L, Tunis S. Comparative effectiveness research in Ontario, Canada: producing relevant and timely information for health care decision makers. Milbank Q 2009;87:585–606.
Whitehead D, Keast J, Montgomery V, Hayman S. A preventative health education programme for osteoporosis. J Adv Nurs 2004;47:15–24.
Williams RL, Rhyne RL. No longer simply a practice-based research network (PBRN): health improvement networks. J Am Board Family Med 2011;24:485–8.
Williamson T. Commentary on Glasson J, Chang E, Chenoweth L, Hancock K, Hall T, Hill-Murray F, Collier L. Evaluation of a model of nursing care for older patients using participatory action research in an acute medical ward (J Clin Nurs 2006;15:588–98). J Clin Nurs 2007;16:1981–2.
Wilson MG, Rourke SB, Lavis JN, Bacon J, Travers R. Community capacity to acquire, assess, adapt, and apply research evidence: a survey of Ontario's HIV/AIDS sector. Implementation Sci 2011;6:54.
Wilson PM, Petticrew M, Calnan MW, Nazareth I. Disseminating research findings: what should researchers do? A systematic scoping review of conceptual frameworks. Implementation Sci 2010;5:91.
Wilson PM, Petticrew M, Calnan MW, Nazareth I. Does dissemination extend beyond publication: a survey of a cross section of public funded research in the UK. Implementation Sci 2010;5:61.
Wimpenny K, Forsyth K, Jones C, Evans E, Colley J. Group reflective supervision: thinking with theory to develop practice. Br J Occup Ther 2006;69:423–8.
Woelk G, Daniels K, Cliff J, Lewin S, Sevene E, Fernandes B, et al. Translating research into policy: lessons learned from eclampsia treatment and malaria control in three southern African countries. Health Res Policy Syst 2009;7:31.
Wolstenholme E. Organizational re-engineering using systems modelling: rediscovering the physics of the health service. IMA J Math Appl Med Biol 1995;12:283–96.
Wooding S, Hanney S, Pollitt A, Buxton M, Grant J. Project retrosight: understanding the returns from cardiovascular and stroke research: the policy report. Santa Monica, CA: RAND Corporation; 2011.
Wu STC. Collaborative action research in child health promotion: a school nursing experience. PhD thesis. Hong Kong: The Hong Kong Polytechnic University; 2006.
Wu YW, Crosby F, Ventura M, Finnick M. In a changing world: database to keep the pace. Clin Nurse Spec 1994;8:104–8.
Yano EM. The role of organizational research in implementing evidence-based practice: QUERI Series. Implementation Sci 2008;3:29.
Young D. The effect of an early childhood caregiver education program on primary caregivers in rural Nicaragua. Dissertation Abstr Int Section A Humanities Soc Sci 2009;70:1552.
Zahra SA, George G. Absorptive capacity: a review, reconceptualization, and extension. Acad Manag Rev 2002;27:185–203.
List of abbreviations
- ACOSOG: American College of Surgeons Oncology Group
- AHRQ: Agency for Healthcare Research and Quality
- ASSIA: Applied Social Sciences Index and Abstracts
- AUKUH: Association of UK University Hospitals
- CBPPR: community-based provider participation in research
- CCOP: Community Clinical Oncology Program
- CHSRF: Canadian Health Services Research Foundation
- CIHR: Canadian Institutes of Health Research
- CINAHL: Cumulative Index to Nursing and Allied Health Literature
- CLAHRC: Collaborations for Leadership in Applied Health Research and Care
- CME: continuing medical education
- CQI: continuous quality improvement
- CQRS: Quebec Social Research Council
- CTN: Clinical Trials Network
- EBP: evidence-based practice
- HMIC: Health Management Information Consortium
- HTA: Health Technology Assessment
- IARC: International Agency for Research on Cancer
- IDSRN: Integrated Delivery Systems Research Network
- ITT: invitation to tender
- MRC: Medical Research Council
- NCI: National Cancer Institute
- NIH: National Institutes of Health
- NIHR: National Institute for Health Research
- PAR: participatory action research
- PBRN: practice-based research network
- PI: performance indicator
- QI: quality improvement
- QUERI: Quality Enhancement Research Initiative
- R&D: research and development
- RAND: RAND Corporation
- RCT: randomised controlled trial
- SDO: Service Delivery and Organisation
- SIGLE: System for Information on Grey Literature in Europe
- VA: United States Department of Veterans Affairs
- VHA: Veterans Health Administration