
Measurement resources for dissemination and implementation research in health

Abstract

Background

A 2-day consensus working meeting, hosted by the United States National Institutes of Health and the Veterans Administration, focused on issues related to measurement and reporting in dissemination and implementation (D&I) research. Meeting participants included 23 researchers, practitioners, and decision makers from the USA and Canada, who concluded that the field would greatly benefit from measurement resources to enhance the ease, harmonization, and rigor of D&I evaluation efforts. This paper describes the findings from an environmental scan and literature review of resources for D&I measures.

Findings

We identified a total of 17 resources, including four web-based repositories and 13 static reviews or tools that attempted to synthesize and evaluate existing measures for D&I research. Thirteen resources came from health disciplines, and 11 were populated from database reviews. Ten focused on quantitative measures, and all were generated as a resource for researchers. Fourteen were organized according to an established D&I theory or framework, with the number of constructs and measures ranging from 1 to more than 450. Measure metadata were quite variable, with only six resources providing information on the psychometric properties of measures.

Conclusions

Additional guidance on the development and use of measures is needed. A number of approaches, resources, and critical areas for future work are discussed. Researchers and stakeholders are encouraged to take advantage of a number of funding mechanisms supporting this type of work.


Background

Measurement issues often threaten the evolution of new fields [1]. Recent reviews suggest that fewer than 50 % of existing dissemination and implementation (D&I) research measures are psychometrically validated (i.e., in many cases, no data exist on whether a measure assesses the construct it is intended to assess) [2–4]. In addition to psychometric quality, information about pragmatic quality, including clinical or operational utility, is gaining ground as an important measurement dimension [5], particularly for advancing the practice of D&I [6]. Additional challenges include the dearth of measures available for certain D&I constructs (e.g., context, adaptation) [7]. Perhaps the most critical challenge, however, is the combination of these issues: the apparent lack of pragmatic, high-quality measures for key constructs and the disparate use of measures across studies, which inhibits the integration of results from observational and interventional studies conducted across multiple sectors that examine health behaviors and outcomes.

D&I research is especially vulnerable to communication barriers that may exacerbate these measurement issues given the rapid spread of the field across numerous disciplines within and outside of healthcare (e.g., healthcare, mental health, public health, education). A likely issue is that D&I scientists working in clinical medicine or healthcare may not be aware of measures available from those working in public health or mental health, for example. Without shared measures resources, the field is vulnerable to redundancies in measure development and missed opportunities to use common measures across studies, with the ultimate consequence being an artificially fractured knowledge base and inefficient efforts to advance the field.

From 2007 to 2012, the US National Institutes of Health (NIH) held five large conferences on D&I research, choosing themes that would call attention to aspects of the field for which advances were particularly needed. While the large meetings, including one specifically focused on research methods and measurement, were able to spotlight the "state of the science" across different domains to guide development, there was limited opportunity to directly fill the gaps that speakers had identified. In 2013, the NIH developed a series of three meetings to convene working groups of leaders in D&I research to identify gaps, articulate key next steps, and locate potential tools for the field related to (1) training, (2) study design, and (3) measurement and standardized reporting. On October 23–24, 2013, the working group on measurement and standardized reporting brought together 23 representatives from large-scale efforts to synthesize and evaluate D&I measures, including the Society for Implementation Research Collaboration (SIRC), the US National Cancer Institute (NCI) Grid-Enabled Measures D&I campaign, and the affiliated NIH Clinical and Translational Science Award Community-Engaged Research and Comparative Effectiveness Research measurement effort. The group took on the challenge of assessing the state of D&I measures and identifying mechanisms to improve standardized reporting across studies. In bringing this group of scientists together, it quickly became apparent that cross-talk between different research areas was lacking, not only with respect to the use of similar measures but also in terms of knowledge of measure resources that could promote ease of measure identification, selection, and harmonization. Thus, a subgroup of scientists from this meeting aimed to locate existing measure resources to share with D&I-engaged scientists and to report action steps that emerged from the meeting, with a focus on measure development grants as a potential avenue for filling the gaps in the field.

Findings

Method

Review objective and scope

The primary goal of this paper is to provide a review of existing measure resources relevant to D&I research and to describe their characteristics and possible uses by researchers and other practitioners/end-users. Specifically, the target was resources that provide information about D&I measures (i.e., websites or systematic reviews that synthesize information about existing D&I measures), not individual D&I measures, as those are captured within the resources we sought to identify. Measure resources include living repositories (e.g., websites and wiki pages) and static resources (e.g., systematic and scoping reviews).

Resource identification

To increase the comprehensiveness of the search, we employed a two-step process that concluded in May 2015. First, we conducted an environmental scan using a respondent-driven, non-probabilistic sampling approach to identify key informants who could point us to resources beyond the peer-reviewed literature, whether in the gray literature or still in development. This approach, which leverages the informational power of social networks, can augment traditional environmental scans and literature reviews when the searched-for items (i.e., measure resources) are not clearly and consistently indexed with standard terms in bibliographic databases. The following listservs were accessed: the SIRC, the Association for Behavioral and Cognitive Therapies Dissemination and Implementation Science Special Interest Group, and the Implementation Network listserv. We also searched websites and electronic newsletters for additional resources.

Second, a review (scoping and systematic) of the published literature was conducted using two approaches. First, an initial set of publications (reviews of dissemination and/or implementation measures) was identified through recommendations from attendees of the NIH meeting. Then, a systematic review of the literature was completed using two bibliographic databases (PubMed, Web of Science) to identify papers published in English between 2000 and 2014, using a set of search term combinations (dissemination/implementation + measure/measurement/instrument/scale/evaluation + review). Titles and abstracts were filtered for reviews (both systematic and non-systematic) of dissemination and/or implementation measures.
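The reported term sets expand to ten distinct combinations (two topic terms × five measure terms × one review term). As a minimal sketch, the Python fragment below enumerates those combinations; the exact boolean syntax and field tags the authors used in PubMed and Web of Science are not reported, so the query forms here are illustrative assumptions.

```python
from itertools import product

# Term sets reported in the text. The boolean forms below are
# illustrative assumptions, not the authors' actual query strings.
topic_terms = ["dissemination", "implementation"]
measure_terms = ["measure", "measurement", "instrument", "scale", "evaluation"]
review_terms = ["review"]

# Enumerate every combination (2 x 5 x 1 = 10 candidate queries).
queries = [
    f"{topic} AND {measure} AND {review}"
    for topic, measure, review in product(topic_terms, measure_terms, review_terms)
]

# Equivalently, a single OR-collapsed query with PubMed-style limits
# for the reported window (2000-2014) and language (English).
combined = (
    "(dissemination OR implementation) AND "
    "(measure OR measurement OR instrument OR scale OR evaluation) AND "
    "review AND 2000:2014[dp] AND english[la]"
)

if __name__ == "__main__":
    for q in queries:
        print(q)
    print(combined)
```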

Resource inclusion/exclusion

Five inclusion/exclusion criteria were set. First, the resources needed to include measures related explicitly to D&I. Therefore, resources that included information about measures used to evaluate D&I outcomes were included, whereas resources focused solely on quality improvement measures, quality of care measures, or patient-level health outcomes were excluded. Second, resources were excluded if the investigative team was unable to access the resource beyond its cited name or if the resource was not yet fully developed. Third, for static resources, we included published reviews (systematic or not) that focused on one or multiple D&I-relevant constructs, including D&I outcomes (e.g., adoption, sustainment; [8]) and factors implicated in the D&I process (e.g., leadership, climate; [9]). Fourth, only reviews were included from the published literature. Finally, resources that discussed quantitative measures, qualitative measures, or both were included.
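To make the screening logic concrete, the sketch below encodes the five criteria as boolean checks over a hypothetical candidate record; the field names are illustrative assumptions, not drawn from the paper. Criterion 5 (quantitative and/or qualitative measures both eligible) excludes nothing, so it needs no check.

```python
from dataclasses import dataclass

@dataclass
class CandidateResource:
    """One record per candidate resource; field names are illustrative."""
    covers_di_measures: bool            # criterion 1: measures explicitly related to D&I
    qi_or_patient_outcomes_only: bool   # criterion 1: QI/quality-of-care or patient-level outcomes only
    accessible: bool                    # criterion 2: accessible beyond its cited name
    fully_developed: bool               # criterion 2: not still in development
    is_static: bool                     # static (published) vs. living (web-based) resource
    is_review: bool                     # criteria 3-4: published reviews only, for static resources

def include(c: CandidateResource) -> bool:
    """Apply the five inclusion/exclusion criteria described in the text."""
    if not c.covers_di_measures or c.qi_or_patient_outcomes_only:
        return False  # criterion 1
    if not c.accessible or not c.fully_developed:
        return False  # criterion 2
    if c.is_static and not c.is_review:
        return False  # criteria 3-4
    return True
```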

Data extraction

The data extraction protocol was collaboratively developed to obtain summary information that could be gleaned from, and would be applicable across, the resources, to ultimately aid researchers and stakeholders in determining the resource of most relevance. The extraction captured 13 unique pieces of data reflecting both quantitative and qualitative information about the resources, including characterizing features of the resource (organizing framework, audience, discipline/scope, type of measure, measure identification approach, resource status, and access) and summary data of the measures information (number of constructs, number of measures, whether the measures are included in the resource, measure metadata, psychometric information, pragmatic rating, and analysis level).

Once the resource sample and the data extraction process were finalized, data extraction was completed by two independent research assistants trained by the first authors (BAR and CCL). The research assistants independently extracted data from each resource and then met for consensus to arrive at one set of summary data per resource [10]. When consensus could not be achieved, the first authors were consulted to make a final determination.
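As a rough sketch of this workflow, the fragment below models the extraction fields named above and flags the fields on which two independent raters disagree, which would then go to the consensus meeting; the class, field types, and comparison logic are illustrative assumptions, not the authors' actual instrument.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class Extraction:
    """Extraction fields named in the text; the types are assumptions."""
    # Characterizing features of the resource
    organizing_framework: str
    audience: str
    discipline_scope: str
    measure_type: str              # quantitative, qualitative, or both
    identification_approach: str   # e.g., database review, expert review, snowball sampling
    resource_status: str           # living repository vs. static review
    access: str                    # e.g., public, membership required
    # Summary data of the measures information
    n_constructs: Optional[int]
    n_measures: Optional[int]
    measures_included: bool
    measure_metadata: str
    psychometric_info: bool
    pragmatic_rating: bool
    analysis_level: str

def disagreements(a: Extraction, b: Extraction) -> list[str]:
    """Fields on which the two independent raters differ; these would be
    resolved in the consensus meeting, with the first authors arbitrating
    any that remain."""
    return [f.name for f in fields(Extraction)
            if getattr(a, f.name) != getattr(b, f.name)]
```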

Results

A total of 17 measure resources were included in the review and subjected to data extraction to obtain summary information that may aid end-users in identifying and selecting quantitative and qualitative measures related to D&I (see Tables 1 and 2). Thirteen resources were static reviews and four were web-based resources, the latter of which are reported to be "living" in that they continue to be updated as the literature base grows. Fourteen of the 17 resources are publicly available, requiring no membership or application process to view or use. For seven of these resources, the measures themselves are also publicly available. The majority of the identified resources are accessible at no cost; the exceptions are two reviews that are not published in open-access journals and one web-based resource that requires paid membership but whose results are also available in an open-access peer-reviewed publication [2].

Table 1 Characterizing features of measures resources
Table 2 Summary data of measures information

Three resources came from mental health, one from business, and the remaining 13 from health disciplines. Eleven reported identifying measures from databases, of which five combined this search strategy with a second approach (e.g., expert review, snowball sampling); the remaining six drew from experts, snowball sampling, crowd sourcing, or did not specify their search approach. Ten provided information on quantitative measures only and seven on both quantitative and qualitative measures. All resources were developed for use by researchers, with implementation practitioners as the next most common target audience (n = 9).

Fourteen resources were organized according to an established D&I theory or framework. Most notably, the Consolidated Framework for Implementation Research (CFIR) [9] was represented in three of the resources. The number of constructs ranged from 1 to 359, and a similarly broad range was observed in the number of measures (1 to >450). Although only nine resources provided the measures themselves, 11 provided measure citations to promote ease of access. The amount and type of information provided about both the measures themselves (i.e., metadata) and their development and validation was also quite variable. For example, six of the 17 resources included reliability and validity information, and only three provided information about pragmatic measure qualities. Ten provided measures targeting consumers and 14 targeting providers.

Discussion

In this short report, we described 17 resources that synthesized and evaluated existing measures for D&I research. These resources included four web-based repositories and 13 static reviews or tools, each providing varying levels of measure metadata. The summary of the resources provided herein can be used as a starting point for researchers and other stakeholders (e.g., implementation practitioners, administrators) intending to identify measures for D&I studies. In the case of interactive resources in which crowd sourcing of data and experiences is encouraged, end-users can share experiences and additional measure metadata. Taken together, these can facilitate a culture of measure harmonization and data comparison across studies [11]. For example, the SIRC Instrument Review Project provides expert-informed ratings of measures organized according to the CFIR [9] and the Implementation Outcomes Framework [8], making the identification of scientifically sound D&I measures convenient [2]. Another interactive resource, the D&I Workspace in the Grid-Enabled Measures Database, uses a crowd sourcing approach to populate, update, rate, and comment on measures that are organized around critical D&I constructs [12]. Active participation from researchers and practitioners in the development and refinement of these and other interactive resources for D&I measures is critical to achieving their ultimate goal of being living, relevant resources for the D&I community.

One important shortcoming of these resources, and of the D&I field in general, was identified through a conceptual framework that emerged as one of the main products of the NIH D&I measurement and reporting standards working meeting. This framework revealed a number of D&I-relevant constructs that lack appropriate measures (e.g., context, sustainability, evolution), as well as constructs for which measures exist but are not commonly used (e.g., cost of intervention, adoption, implementation strategy) [7]. For these areas, additional measure development and guidance on the development and use of measures are needed. A number of approaches and resources are in place to support additional measure development:

  1. Generation of single-use measures (i.e., developed “in-house” for use in a specific setting or context) remains the status quo [1]; however, informed by the working meeting, an effort is underway to generate psychometrically strong, pragmatic measures of three implementation outcomes that predict adoption, for use across studies, along with a replicable measure development process [6].

  2. A web-based interactive tool (another product emerging from this NIH D&I meeting) provides guidance for the selection, adaptation, and integration of D&I models and also allows for the linkage of model constructs to existing measures [13].

  3. A working group, Qualitative Research in Implementation Science (QUALRIS), was assembled by the NCI’s Implementation Science team and includes national experts in qualitative and mixed methods research; its charge is to develop guidelines and standards for the use of qualitative methods in D&I research (S. Heurtin-Roberts, personal communication, September 28, 2015).

  4. Funding mechanisms for the advancement of D&I measurement are in place. There are three main venues through which researchers in the USA have been supported for measure development (examples for each venue are provided in Additional file 1):

     I. Research funding announcements have included an explicit focus on measure development as the major activity within a grant or contract. For example, the standing NIH D&I program announcements have consistently called for “Development of D&I-relevant outcome and process measures and suitable methodologies for dissemination and implementation approaches that accurately assess the success of an approach to move evidence into practice” (NIH, PAR-13-055, PAR-13-054, PAR-13-056). Grant applications could thus propose to develop and test a novel D&I instrument as the central aim of the study. Similarly, PCORI’s program funding announcements on Communication and Dissemination Research and Improving Methods for Conducting Patient-Centered Outcomes Research include solicitation of “Studies to develop and compare alternative methods and tools to elicit and include patient-desired outcomes in the healthcare decision-making process” (PCORI, 2013) and “projects to address gaps in methodological research relevant to conducting patient-centered outcomes research (PCOR)” (http://www.pcori.org/funding-opportunities/announcement/improving-methods-conducting-patient-centeredoutcomes-research-3).

     II. Multiple funders have enabled researchers to include measurement development for key outcomes of a prospective trial as part of study development.

     III. Finally, funding announcements have created opportunities for measure development as part of a broader set of activities. The US Veterans Affairs’ Health Services Research and Development program, for example, reviews implementation science-relevant applications and allows measurement development to exist within them. Furthermore, the Quality Enhancement Research Initiative (QUERI) mechanisms allow research protocols to be added onto an existing quality improvement project that is part of the QUERI program or a partnered evaluation, as long as the addition fits the main program’s impact goal and the needs of the operational partner [14]. The NIH Institutes and Centers have frequently included methods and measurement cores as components of research centers, conference grants (SIRC, for example, began through an NIMH-funded conference grant, 5R13MH086159-05, https://projectreporter.nih.gov/project_info_description.cfm?aid=8645741&icde=26765462&ddparam=&ddvalue=&ddsub=&cr=2&csb=default&cs=ASC), and other infrastructure mechanisms.

Conclusion

Advancing and strengthening measurement approaches for D&I research is critical to building a cumulative scientific knowledge base and offering tools for informing the real-world practice of D&I. A number of existing measurement resources can provide a starting point for researchers and stakeholders in identifying appropriate measures and harmonizing measurement use across studies. However, additional work needs to take place to advance and strengthen the field. Critical areas for development include the following: additional high-quality, pragmatic measures for key D&I-related constructs for which measures do not exist; a core set of brief measures that can be used efficiently across pragmatic clinical trials and practice-based observational studies; and a rapid-cycle measure development process. Researchers and stakeholders are encouraged to take advantage of a number of funding mechanisms supporting this type of work. Implementation of a set of core measures across multiple studies would facilitate future syntheses, enabling examination of the impact of various D&I constructs on clinical and population health outcomes.

Abbreviations

CFIR: Consolidated Framework for Implementation Research

D&I: dissemination and implementation

EBP: evidence-based practice

NCI: National Cancer Institute

NIH: National Institutes of Health

NIMH: National Institute of Mental Health

PCOR: patient-centered outcomes research

PCORI: Patient-Centered Outcomes Research Institute

QOL: quality of life

QUALRIS: Qualitative Research in Implementation Science

QUERI: Quality Enhancement Research Initiative

SIRC: Society for Implementation Research Collaboration

References

  1. Martinez RG, Lewis CC, Weiner BJ. Instrumentation issues in implementation science. Implement Sci. 2014;9:118.


  2. Lewis CC, Stanick CF, Martinez RG, Weiner BJ, Kim M, Barwick M, Comtois KA. The society for implementation research collaboration instrument review project: a methodology to promote rigorous evaluation. Implement Sci. 2015;10:2.


  3. Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: a systematic review of structural, organizational, provider, patient, and innovation level measures. Implement Sci. 2013;8:22.


  4. Chor KHB, Wisdom JP, Olin S-CS, Hoagwood KE, Horwitz SM. Measures for predictors of innovation adoption. Adm Policy Ment Health Ment Health Serv Res. 2014;1–29.

  5. Glasgow RE, Riley WT. Pragmatic measures: what they are and why we need them. Am J Prev Med. 2013;45:237–43.


6. Lewis CC, Weiner BJ, Stanick C, Fischer S. Advancing implementation science through measure development and evaluation: study protocol. Implement Sci. 2015;10:102.

  7. Neta G, Glasgow RE, Carpenter CR, Grimshaw JM, Rabin BA, Fernandez ME, Brownson RC. A framework for enhancing the value of research for dissemination and implementation. Am J Public Health. 2015;105:49–57.


  8. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38:65–76.


  9. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.


  10. Hill CE, Knox S, Thompson BJ, Williams EN, Hess SA, Ladany N. Consensual qualitative research: an update. J Couns Psychol. 2005;52:196.


  11. Hesse BW. Technology-mediated social participation in health and healthcare. In: Technology mediated social participation workshop. 2010.


  12. Rabin BA, Purcell P, Naveed S, Moser RP, Henton MD, Proctor EK, Brownson RC, Glasgow RE. Advancing the application, quality and harmonization of implementation science measures. Implement Sci. 2012;7:119.


  13. Dissemination & implementation models [http://www.dissemination-implementation.org/]

  14. QUERI strategic plan [http://www.queri.research.va.gov/about/strategic_plans/default.cfm]

  15. Holt DT, Helfrich CD, Hall CG, Weiner BJ. Are you ready? How health professionals can comprehensively conceptualize readiness for change. J Gen Intern Med. 2010;25:50–5.


  16. Weiner B, Amick H, Lee S-Y. Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev. 2008;4:379–436.


  17. What is the clinical-community relationships measurement framework? [http://www.ahrq.gov/professionals/prevention-chronic-care/resources/clinical-community-relationships-measures-atlas/ccrm-atlas3.html]

18. Rathje H, Hill B. Safety change and transition tools compendium. 1st ed. 2016. [https://www.eurocontrol.int/sites/default/files/content/documents/nm/safety/safety-change-and-transitiontools-compendium-main-document-2010.pdf]

  19. AHRQ.gov [http://primarycaremeasures.ahrq.gov/team-based-care//Home/Framework?TopicId=12]

  20. Birken SA, Presseau J, Ellis SD, Gerstel AA, Mayer DK. Potential determinants of health-care professionals’ use of survivorship care plans: a qualitative study using the theoretical domains framework. Implement Sci. 2014;9.

  21. About us—CIHR [http://www.cihr-irsc.gc.ca/e/29418.html]


Acknowledgements

Research reported in this publication was supported by the National Institute of Mental Health of the National Institutes of Health under Award Number R01MH106510. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. We would also like to thank Caitlin Dorsey and Abigail Melvin for their diligent work as raters in summarizing the measure resources, and W. Chase Cameron for his help with the review of the literature.

Author information


Corresponding author

Correspondence to Borsika A. Rabin.

Additional information

Competing interests

The authors declare that they have no competing interests.

WEN is on the editorial board for the journal Implementation Science.

Authors’ contributions

BAR and CCL served as equal-contribution lead authors for this paper. RCB, JNT, and REG provided guidance on the initial conceptualization of the paper. BAR and CCL operationalized the concept, integrating feedback from co-authors during development stages of the manuscript. WEN contributed to the iterative development of the study methods. BAR, CCL, and WEN spearheaded an international effort to identify measure resources. DC, GN, and CCL drafted the background of the manuscript. CCL led a team of two research assistants to complete the review, data extraction, and synthesis. CCL wrote a draft of the method and results components of the "Findings" section. BAR conducted a review of the extant literature to identify peer-reviewed resources and wrote that section of the method. BAR, DC, and GN drafted the discussion components of the "Findings" section. All authors reviewed, edited, and approved the submitted version of the manuscript.

Additional file

Additional file 1:

Examples of funding mechanisms for the advancement of D&I measurement. (DOCX 21 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Rabin, B.A., Lewis, C.C., Norton, W.E. et al. Measurement resources for dissemination and implementation research in health. Implementation Sci 11, 42 (2016). https://doi.org/10.1186/s13012-016-0401-y
