Background

Efficient use of health resources benefits both individuals and society — and one way to increase efficiency is to abandon obsolete and ineffective health interventions [1, 2]. De-implementation is typically aimed at reducing the use of low-value care, which has been described as care that provides little or no benefit, is potentially harmful, and leads to unnecessary costs to patients or wastes health care resources [3]. To achieve this, de-implementation strategies are needed. De-implementation, a process to reduce the use of a medical practice, can occur in four ways: by removing, replacing, reducing, or restricting the use of the practice [4]. Each category has different underlying reasons, and therefore different solutions may be needed [5]. It is easier to implement new interventions than to de-implement existing medical practices [6].

Different terms are used for these withdrawal actions, such as de-implementation and disinvestment [7]. Public policy concepts, like disinvestment, are relevant to de-implementation research. Many de-implementation frameworks and models mention costs as a justification for de-implementation [8]. De-implementation has the potential to decrease health care costs [5, 7, 9], but realizing these savings requires an evaluation of clinical practices and care pathways. In some cases, de-implementation may increase health care costs while still reducing costs to society. Health technology assessment is one way to assess these changes in clinical practices and care pathways [10]. Differences between health care systems across countries affect clinical practices, care pathways, and costs, which must be taken into account when transferring information from one country to another.

Economic evaluation has been identified as a crucial part of implementation research [11,12,13,14,15,16], and costs are identified as a key outcome in implementation research [13, 17]. An economic evaluation can reveal whether using a strategy to improve the quality of health care is a cost-effective use of limited resources [13]. Without knowing the costs of implementation strategies, it is difficult, or even impossible, to compare different strategies or even to implement them [14]. Accordingly, implementation studies should report the relevant costs of an implementation strategy, the sources of cost data, and how the costs were calculated. Costs should include all costs from development to execution, such as staff, material, and training costs [12]. However, a previous systematic review showed that the quantity of economic evaluation in the field of implementation research is modest and called for more systematic and comprehensive reporting of costs in implementation research [12]. In the economic evaluation of guideline implementation, there are three distinct stages: development of the guidelines, implementation of the guidelines, and the treatment effects and costs that follow from behavior change. A systematic review of these costs found that costs were reported in a quarter of studies (27%), that methodological quality was poor, and that none of the included studies gave reasonably complete information on costs [18].

The above considerations apply equally to de-implementation [19], as de-implementation processes have been observed to be difficult and resource-intensive, and the actual costs and subsequent savings are not well understood [3, 4]. A study [19] conceptualized the outcomes of de-implementation and recommended a clear distinction between the target of de-implementation and the strategies used. The recommendations included several aspects that should be considered when measuring the costs of de-implementation, such as potential cost savings due to decreased use of the target intervention, the costs of de-implementation strategies, the impacts on health care providers, and time.

Since the aims of implementation and de-implementation are similar — to improve the quality of health care and the effective use of resources — studies of de-implementation strategies need to measure clinically relevant outcomes but also to analyze whether a strategy leads to a change in health care costs. The potential savings in health care costs as well as the costs of the de-implementation strategy itself must be taken into account.

The aim of this systematic scoping review was to analyze how de-implementation studies have reported both the costs of de-implementation strategies and the impacts (estimated or measured) of de-implementation on health care costs.

Methods

We used the PRISMA extension for scoping reviews (PRISMA-ScR) [20] to guide the conduct and reporting of this review (Additional file 1). This analysis of de-implementation costs and de-implementation impacts on health care costs was undertaken as part of a systematic scoping review of de-implementation randomized controlled trials (RCTs) [21]. This systematic scoping review was registered with the Open Science Framework (OSF ueq32).

Data sources and searches

The literature searches for these economic analyses were drawn from the registered systematic scoping review and are described in detail elsewhere [21]. We searched for de-implementation RCTs in the MEDLINE and Scopus databases up to May 24, 2021, without language or publication date limitations. The search strategy (Additional file 2) was developed in consultation with a medical information specialist (T. L.). We based our search on a previous scoping review identifying de-implementation-related terms [7] and modified it iteratively based on systematic reviews [22, 23]. We searched the reference lists of systematic reviews identified by our search to find additional potentially eligible articles. We also followed up protocols and post hoc analyses and added their main articles to the selection process.

Study inclusion and exclusion criteria

The inclusion and exclusion criteria have been described previously [21]. In brief, we included RCTs that aimed to reduce the use of a clinical practice. We included all de-implementation intervention types targeting any clinical practice and all target groups (patients, health care personnel, organizations, and citizens in general). We excluded articles on de-prescribing trials, because in our opinion the context is different (stopping a treatment already in use vs. not starting a treatment) [24]. We also excluded trials where one medical practice was used to de-implement another medical practice and trials where the reason for de-implementation was to reduce resource use (e.g., financial resources or clinical visits) [21].

Risk of bias

To assess the quality of the included studies, we used a modified Cochrane risk-of-bias tool (RoB2.0) for randomized trials [25]. The process of modification is described in detail elsewhere [21]. This modified tool includes six criteria, judging studies to be at either high or low risk of bias (Additional file 3). The six criteria are as follows: (1) randomization procedure, (2) allocation concealment, (3) blinding of outcome collectors, (4) blinding of data analysts, (5) missing outcome data, and (6) imbalance of baseline characteristics. Four of the researchers conducted the quality assessment independently and in duplicate.

Data collection and extraction strategy

Working independently and in duplicate, we used standardized forms with detailed instructions to identify eligible articles (title and abstract screening and full-text screening) and to extract data. Disagreements were resolved through discussion and, if necessary, by consulting a third investigator.

We collected the following data: (1) study characteristics (i.e., author(s), year, country of origin, sample size), (2) types and characteristics of interventions (i.e., intervention strategy, target groups of intervention), (3) characteristics of the practice of interest (i.e., target intervention, medical content area, medical settings), (4) outcomes of the study, (5) intervention efficacy, (6) costs of de-implementation (i.e., total costs, costs per unit), and (7) effect on health care costs (target group, size and direction of effect, and what was measured or estimated). The costs of de-implementation had to be reported in monetary form, and total or per-unit costs had to be specified by the study authors. Data regarding the costs of de-implementation and the effect on health care costs are reported in this article; other outcomes are reported elsewhere [21].

Data synthesis and analysis

We summarized the characteristics and details of the de-implementation strategies and target population(s) and provided an overview of de-implementation costs. We extracted the costs in the reported currency and converted them into 2021 USD to facilitate comparability across all included studies. We converted EUR to USD, rather than the reverse, because more of the included studies used USD. We used a modified Effective Practice and Organisation of Care (EPOC) taxonomy [21] to categorize interventions and to analyze possible cost differences between de-implementation strategies.
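The conversion step described above can be illustrated with a minimal sketch: first convert the reported currency to USD at an exchange rate, then adjust to the 2021 price level with a price index. All exchange rates and index values below are hypothetical placeholders, not the figures used in this review; in practice, official exchange rates and a published consumer price index (or health-specific deflator) for the relevant years would be used.

```python
# Illustrative two-step conversion of a reported cost to 2021 USD.
# All rates and index values are hypothetical placeholders.
USD_PER_UNIT = {"USD": 1.00, "EUR": 1.18, "GBP": 1.38}     # hypothetical exchange rates
US_PRICE_INDEX = {2000: 172.2, 2010: 218.1, 2021: 271.0}   # hypothetical price index

def to_usd_2021(amount: float, currency: str, year: int) -> float:
    """Convert a cost reported in `currency` at `year` prices to 2021 USD."""
    usd = amount * USD_PER_UNIT[currency]                    # step 1: exchange rate
    return usd * US_PRICE_INDEX[2021] / US_PRICE_INDEX[year]  # step 2: inflation

# Example: a cost of 10,000 EUR reported at 2010 prices
cost_2021 = to_usd_2021(10_000, "EUR", 2010)
```

The order of the two steps (convert first, inflate second, or vice versa) depends on whether a US or a local price index is used; the sketch above applies a US index after conversion.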

Finally, we provided an overview of the impact of de-implementation on health care costs. We reported the direction of the effects and the cost allocations, relying on the authors' conclusions about the significance of the effect. We analyzed possible between-study differences in the influence on health care costs, which was reported in various ways (monetary and qualitative).

We planned to do subgroup analyses based on (i) health care settings, (ii) target of intervention, (iii) health care financing, and (iv) country income groups. We assumed beforehand that the studies would be heterogeneous, so a meta-analysis would not provide added value.

We used summary statistics (i.e., frequencies and proportions) to describe study characteristics and nonparametric tests to analyze differences between the outcomes of interest. For statistical analyses, we used IBM® SPSS® version 28.0.1 (IBM Corp., Armonk, NY, USA); P-values less than 0.05 were considered statistically significant.
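The nonparametric comparisons in this review were run in SPSS. Purely as an illustration of the test used, a two-sided Mann–Whitney U test can be sketched as follows; this self-contained version uses the normal approximation without tie or continuity corrections, so its p-values are approximate, and the cost figures in the usage example are invented for demonstration.

```python
import math

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test (normal approximation, no
    tie/continuity correction). Returns (U, approximate p-value)."""
    # Rank the pooled sample, assigning average ranks to ties.
    combined = sorted((v, i) for i, v in enumerate(list(a) + list(b)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[:n1])            # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)       # smaller of the two U statistics
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma            # z <= 0 because u <= mu
    p = 1 + math.erf(z / math.sqrt(2))  # = 2 * Phi(z) for z <= 0
    return u, min(p, 1.0)

# Hypothetical total costs (USD) with vs. without a given strategy
u, p = mann_whitney_u([118_000, 330_000, 747_000], [616, 8_090, 32_300])
```

For real analyses, `scipy.stats.mannwhitneyu` offers exact p-values for small samples and proper tie handling.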

Results

Of the 12,815 articles identified in our search, we evaluated 1022 full-text articles. We included 227 RCTs, of which only 50 (22%) reported any costs or impact on health care costs. Figure 1 presents the PRISMA flow diagram, and a list of all included studies is found in Additional file 4.

Fig. 1
figure 1

PRISMA flow diagram

The publication dates of the included articles ranged from 1982 to 2021, with half of the articles published after 2010. Most articles were from North America (n = 18, 36%) and Europe (n = 16, 32%). The majority of the studies (n = 41, 82%) targeted one type of professional, and nine studies (18%) reported several target groups. Around two-thirds of the studies (n = 35, 70%) were conducted in primary care. The trials aimed at reducing the use of drug treatments (n = 37, 74%), laboratory tests (n = 8, 16%), or diagnostic imaging (n = 6, 12%). The studies used 16 different de-implementation strategies. Twenty-six used only one strategy, and twenty-four were multifaceted, including two or more strategies. In all studies, the goal was to reduce the use of a specific health care intervention; in 14 studies, an additional goal was to replace it with another practice. The characteristics of the included studies are shown in Table 1, and the full characteristics are found in Additional file 5.

Table 1 Characteristics of the de-implementation interventions

Risk of bias

Randomization was adequately generated in all studies. However, allocation concealment was not adequate in 10 studies (20%), 22 studies (44%) had missing data, and 20 (40%) had imbalance in baseline characteristics. Data collectors were blinded in 41 studies (82%) but data analysts in only two studies (4%) (Table 2).

Table 2 Risk-of-bias assessment

De-implementation costs

The total costs of the de-implementation intervention were reported in 13 studies (6% of all 227 included RCTs). These total costs varied considerably, with a median of US $32,300 (range: US $616 to $747,000). Table 3 shows the total costs converted to US dollars at 2021 value.

Table 3 De-implementation total costs and costs per unit, converted by the authors to USD 2021 value (on October 25, 2022)

The 13 studies that reported total costs (26% of the 50 studies reporting any costs) used ten different de-implementation strategies. The most common strategies were educational materials (n = 9), audit and feedback (n = 7), educational meetings for individuals (n = 4), treatment algorithms (n = 3), educational meetings for groups (n = 3), and developing clinical practice guidelines (n = 2). The strategies used in one study each included alerts, a local consensus process, educational material for patients, and a public intervention. Strategy combinations were diverse; many combined different educational strategies. In studies using educational material, the median total cost was US $118,000 (range: US $6845 to $747,000). Total costs seemed higher in studies using educational materials than in studies not using them (Mann–Whitney test, p = 0.05). For the other strategies, total costs did not differ significantly between studies using and not using each strategy (Mann–Whitney test, all p > 0.05).

In studies that used only one de-implementation strategy (n = 4, 31%), the median total cost was US $8090 (range: US $616 to $32,300). In studies using two strategies (n = 2, 15%), the median total cost was US $224,000 (range: US $118,000 to $330,000), and in studies using three or more strategies (n = 7, 54%), the median total cost was US $43,600 (range: US $6845 to $747,000).

Costs per unit were reported in 12 studies (5% of all 227 RCTs). The most common unit was cost per physician, but costs per health care provider, health care unit, day, and patient were also reported (Table 3). These studies used various de-implementation strategies, most commonly educational materials (n = 8), educational meetings for individuals (n = 6) and for groups (n = 5), audit and feedback (n = 5), and educational materials for patients (n = 2). Alerts, treatment algorithms, a public intervention, and developing clinical practice guidelines were each used once. Per-unit costs did not differ between studies using a given strategy and those not using it (Mann–Whitney test, all p > 0.05).

Of the articles that reported total costs or costs per unit, 10 out of 18 (56%) offered at least some detailed information on the costs, but only four (22%) reported the exact costs. The most frequently reported types of costs were material costs, payments for trainers, and travel costs. In addition, postage, rent of premises, and loss of working hours were occasionally reported. Costs related to planning the de-implementation intervention were rarely reported. Costing methods were not described in the articles, and none of the articles separated costs by phase of the de-implementation process.

A meta-analysis was not possible due to the heterogeneity of the studies (e.g., the type and number of de-implementation strategies used). There were few studies in the pre-specified subgroups, so subgroup analyses were not appropriate.

Impact on health care costs

The impact on health care costs was reported in 43 studies (19% of all 227 RCTs). In most cases, the reports did not specify to whom the impact accrued (n = 25, 58%). In four studies (9%), the impact was on patients' own costs, whereas in 14 studies (33%), it was on the health care provider's costs. In 27 studies (63%), health care costs decreased; in 14 (33%), there was no change; and in two (5%), the costs increased. The impact concerned medicine costs (n = 29, 67%), laboratory test costs (n = 8, 19%), total health care utilization costs (n = 3, 7%), diagnostic testing costs (n = 2, 5%), and radiography costs (n = 2, 5%).

Most of the articles (n = 32, 74%) based their assessments on calculated differences in costs between the intervention and control groups. In eight studies (19%), the authors extrapolated the cost changes in the intervention group to larger populations or longer time periods. In one study [29], the authors performed cost-effectiveness analyses, and two studies reported only the cost changes during the intervention period.

In the two studies [30, 31] reporting increased costs, the increases were minor and were allocated to patients. When the impact was allocated to the health care unit, de-implementation either decreased costs (n = 12, 86%) or had no effect on them (n = 2, 14%). In six studies, the authors estimated the impact on health care costs. De-implementation influenced laboratory test costs (n = 6), medicine costs (n = 5), diagnostic testing costs (n = 2), radiology costs (n = 2), and total health care expenditure per visit (n = 1). Table 4 shows the direction and size of the impact, which was reported in different ways.

Table 4 De-implementation impact on health care costs per allocated health care unit

Of the 25 studies that did not detail the allocation of the impact, 14 (56%) reported a decrease in costs, whereas 11 (44%) reported no change (Table 5). The impact was calculated in twelve studies (48%) and estimated in five (20%); in the remaining studies, it was not possible to assess from the report whether the impact was calculated or estimated. In most of the studies (n = 20, 80%), de-implementation mainly influenced the costs of medicines and laboratory tests. The reported changes in health care costs ranged from US $12.6 per patient to US $80.4 million per country. Of these 25 studies, 15 reported the impact in monetary terms, using different currencies (Table 5).

Table 5 De-implementation impact on health care costs in studies, not specifying to whom change was allocated

The 43 studies that reported an impact on health care costs used 15 different de-implementation strategies. The most common strategies were educational meetings for groups (n = 14, 33%), educational materials (n = 13, 30%), audit and feedback (n = 8, 19%), educational meetings for individuals (n = 6, 14%), treatment algorithms (n = 5, 12%), educational materials for patients (n = 5, 12%), and developing clinical practice guidelines (n = 3, 7%). Public release of performance data and patient-mediated interventions were each used in two studies. The strategies used in one study each included financial incentives for health care workers, a local consensus process, local opinion leaders, managerial supervision, and routine patient-reported outcome measures.

Both the total costs of de-implementation and the impact on health care costs were reported in seven articles (14%), while unit costs and the impact on health care costs were reported in five articles (10%) (Table 6). The articles by Zwar et al. [27] and Butler et al. [26] reported both total and unit costs, with the unit costs expressed in the same unit as the reported impact. In two studies [27, 28], the intervention's unit costs were lower than its impact on health care costs. In the study by Butler et al. [26], the authors noted that although their intervention decreased health care costs, the intervention costs exceeded the savings.

Table 6 De-implementation costs and impact on health care costs in USD (converted by authors in October 2022)

Discussion

Even though de-implementation is often justified by emphasizing the control of health care costs [5, 7, 32], our findings indicate that intervention costs and impacts on health care costs were rarely reported in randomized trials of de-implementation. Even when costs were reported, the information on intervention costs or on the impact on health care costs was minimal. Costs related to data collection and analysis or to planning the de-implementation intervention were rarely reported. We also found that the methods for reporting intervention costs and impacts on health care costs were heterogeneous, which obscures the relationship between costs and impact. Our results are similar to those of a previous systematic review in implementation research [13], which found that aspects such as project management costs, the cost of clinicians' time, and monitoring costs were not adequately covered. A systematic review [18] of economic evaluations and cost analyses in guideline implementation found similar limitations in trial reporting: in all of the studies, the quality of cost information was limited, and only 27% of the 235 included studies reported any information on costs.

For economic evaluation, information on resource use, costs, time horizons, health outcomes, and the consequences of interventions is necessary [33]. Incomplete cost information on de-implementation interventions precludes economic evaluation or, at worst, may lead to distorted conclusions. The lack of cost information has been identified as a barrier to implementation [12, 14]. De-implementation requires sufficient financial, technical, and human resources [34]. Without cost information, it is impossible to evaluate the costs in a systematic way or to base decision-making on them; knowledge-based decisions become possible only when both the intervention costs and the impact on health care costs are known.

To improve the utilization of de-implementation research, economic evaluation should be planned alongside the research. Studies should then report the precise monetary costs of de-implementation strategies and their impact on health care costs. When reporting costs, some general considerations should be taken into account: (i) give detailed and reasoned values, (ii) itemize the costs included, (iii) state the time horizon over which the costs apply, and (iv) break down the costs of the planning and execution phases of the de-implementation process.

Strengths and limitations

Our highly sensitive literature search used a wide variety of de-implementation terms. However, due to the heterogeneous indexing of de-implementation studies, we may have missed relevant articles.

A strength of our review is that we searched for cost information and de-implementation impact information in the full text of the articles, which noticeably increased the number of included articles, as the impact on health care costs tended to be reported in the full text rather than in the abstract.

We restricted our study to RCTs, which may be seen as a limitation. Since many de-implementation projects have likely not included randomized control groups, we missed economic information from these studies. However, the efficacy of interventions should be studied in RCT settings, and thus, we believe that our decision to exclude other study designs is justified.

This review was conducted alongside another review, which may have restricted the number of included studies. We excluded studies in which one medical practice was used to de-implement another, because these often focus on implementation rather than de-implementation. We focused on the de-implementation of low-value care and therefore excluded articles in which the reason for de-implementation was to cut resource use. Both of these restrictions may have excluded some articles containing cost information; however, we believe that our careful selection of articles in the full-text phase has prevented this. Our perspective was to examine how costs are reported in de-implementation studies, and the search was designed from that viewpoint. This could be a limitation, as some studies reporting costs may have been missed. To mitigate this, we also searched the references of the included studies to find other articles on the same studies. Using an approach like that of Vale et al. might have led to different results.

Conclusion

The vast majority of de-implementation trials have failed to report any intervention costs or impacts on health care costs. In the studies that do include cost information, typically only nonnumerical information on economic impacts is reported, and the direct costs of the de-implementation strategies are omitted. To advance the field, researchers should consider economic aspects and include health economists when planning research. De-implementation interventions are often complex and resource-intensive, and cost information is essential for effective health policymaking.