Workload and workflow implications associated with the use of electronic clinical decision support tools used by health professionals in general practice: a scoping review

Abstract

Background

Electronic clinical decision support (eCDS) tools are increasingly available to assist General Practitioners (GPs) with the diagnosis and management of a range of health conditions. It is unclear whether the use of eCDS tools has an impact on GP workload. This scoping review aimed to identify the available evidence on the use of eCDS tools by health professionals in general practice, in relation to their impact on workload and workflow.

Methods

A scoping review was carried out using the Arksey and O’Malley methodological framework. The search strategy was developed iteratively, with three main aspects: general practice/primary care contexts, risk assessment/decision support tools, and workload-related factors. Three databases were searched in 2019, and updated in 2021, covering articles published since 2009: Medline (Ovid), HMIC (Ovid) and Web of Science (TR). Double screening was completed by two reviewers, and data extracted from included articles were analysed.

Results

The search resulted in 5,594 references; after screening, 95 full-text articles, referring to 87 studies, were included. Of these, 36 studies were based in the USA, 21 in the UK and 11 in Australia. A further 18 originated from Canada or Europe, with the remaining studies conducted in New Zealand, South Africa and Malaysia. Studies examined the use of eCDS tools and reported some findings related to their impact on workload, including on consultation duration. Most studies were qualitative and exploratory in nature, reporting health professionals’ subjective perceptions of consultation duration rather than objectively-measured time spent using tools or consultation durations. Other workload-related findings included impacts on cognitive workload, “workflow” and dialogue with patients, and clinicians’ experience of “alert fatigue”.

Conclusions

The published literature on the impact of eCDS tools in general practice showed that limited research has focused on the impact of such tools on workload and workflow. To gain an understanding of this area, further research, including quantitative measurement of consultation durations, would be useful to inform the future design and implementation of eCDS tools.


Introduction

UK General Practitioners (GPs) manage a high and rising workload of increasingly complex patient care, with many competing demands to attend to within time-limited consultations [1]. This, together with ongoing recruitment and retention challenges, has led to a GP workforce ‘crisis’ [2,3,4,5]. The COVID-19 pandemic has introduced further pressures on general practice, with associated backlogs of consultations, diagnoses, and referrals [6,7,8,9]; GP workload therefore continues to be an increasingly pressing issue for health professionals, patients and policy makers.

Clinical decision support (CDS) tools are used by health professionals to assist with clinical decision making in relation to screening, diagnosis and management of a range of health conditions [10,11,12,13,14]. Many CDS tools exist for use in primary care and more recently are being embedded in electronic form (eCDS) within practice IT systems, drawing directly on data within patients’ electronic medical records (EMR) for their operation [11, 15, 16]. Many Clinical Commissioning Groups and Primary Care Networks have supported the introduction of eCDS tools that facilitate diagnosis and expedite referral for certain conditions, such as cancer, particularly since the COVID-19 pandemic [17]. For the purpose of this article, an eCDS tool is defined as any electronic or computerised tool which provides an output pertaining to a possible diagnosis and/or management of a health condition, using patient-specific information.

The workload implications of GPs using eCDS tools during consultations are unclear. One way of examining GP workload is to evaluate the duration of consultations [18], although consultations are only a single element of GP work, which also includes managing referrals, investigations, results and general administration, undertaking training, and supervising colleagues [19, 20]. The duration of consultations and the ‘flow’ of patients through consulting sessions nevertheless provide key ways of measuring workload, as these have an impact upon GPs’ levels of stress throughout the working day [21,22,23]. Understanding whether using eCDS tools affects consultation duration and patient ‘flow’ through consulting sessions may help facilitate the implementation of eCDS tools into practice.

Here we aimed to establish whether there is existing evidence on the potential workload implications, including impacts on consultation duration, associated with the use of eCDS tools by health professionals in general practice and primary care. The objective of this literature review was therefore to identify the available evidence on using eCDS tools and analyse their impact on workload.

Methods

A systematic scoping review was undertaken to identify literature using the stages set out in the Arksey and O’Malley methodological framework, enhanced by more recent recommendations [24, 25]. This method enables examination of the extent, range and nature of research activity with an aim of identifying all existing relevant literature.

A broad research question was used: What is known from the existing literature about the use of eCDS tools by health professionals in general practice/primary care and the associated impact on workload and patient ‘flow’ through consulting sessions?

An initial scoping search was conducted using the databases MEDLINE (Ovid), HMIC (Ovid) and Web of Science (TR). Keywords from the titles and abstracts identified by this search, and the index terms used to describe these articles, were identified (see Fig. 1). A second search across the same databases was then undertaken using the identified keywords and index terms, and studies were collated for title and abstract screening to identify relevant full-text articles for review. The searches were conducted in September 2019 and updated in August 2021. The review was restricted to articles written in English and published in the ten-year period prior to the initial search date. This time period was selected in order to identify research on eCDS in the context of today’s general practice and primary care, and to manage the review within the available resources. A comprehensive search strategy and the set of search terms used are provided in Fig. 1.

Fig. 1 Search terms

The review aimed to identify research studies, reports and articles, including literature reviews, investigating the use of eCDS tools by all health professionals in relation to their impact on workload, such as consultation duration. The focus on ‘health professionals’ in primary care, not just on GPs, was intentional – we sought to identify all relevant contextual research. Therefore, studies concerning any type of health condition, eCDS tool, healthcare context within primary care or health professional were eligible. Both quantitative and qualitative evidence was included. Systematic reviews were included as studies in their own right, and thereafter the references of studies included in those reviews were screened for eligibility and relevance. Eligible and relevant references within a systematic review were then included in addition to those primary studies identified by the original searches. Studies relating specifically to the design or development of eCDS tools, and those focussing on clinical factors associated with specific conditions, were excluded. Protocol articles were excluded if the published results article of the same study was available.

Study selection was guided by: (i) an initial team meeting to discuss inclusion and exclusion criteria; (ii) independent review of all abstracts and full-text articles by two reviewers; and (iii) team meetings held throughout the process to discuss and resolve disagreements. The following key information was gathered from the included studies: author(s), year of publication, study origin, study aims, type of eCDS tool studied, study population/context, methods, and outcome measures. EF, a health services researcher, classified the key findings into categories, defined as consultation duration-related (‘perceived’ or ‘objectively-measured’) or ‘other’ workload-related. The articles were organised using Covidence review software, then collated in a descriptive format using Microsoft Excel, and reviewed to summarise the key findings.

Results

The database search yielded 5694 publications (4007 after removal of duplicates, Fig. 2). After screening titles and abstracts, 211 publications were selected for full-text screening. Of these, 120 were excluded for not meeting the inclusion criteria, resulting in 91 publications being included in the scoping review. Four of these articles were systematic reviews; screening of the eligibility and relevance of references included in those reviews led to the inclusion of a further four articles. In total, the 95 included articles referred to 87 research studies.

Fig. 2 Summary of the screening process

Description of included articles

All studies were conducted in high-income countries, with the exception of one from Sub-Saharan Africa. A third of the articles originated in the USA (36), with UK and Australian articles comprising another third (21 and 11 respectively). A further 18 publications originated in Canada and mainland Europe, with the remaining studies conducted in New Zealand (2), South Africa (2) and Malaysia (1). For most articles workload was not the main focus; only 16 examined it either as the main focus or as one of their aims.

The most common clinical areas of focus among the eCDS tools studied were cancer risk assessment (15 articles), cardiovascular disease (11), and prescribing for various conditions (10). Other common clinical areas included: blood-borne viruses (3 articles), and various other long-term conditions (14 articles, including those on diabetes, chronic kidney disease, asthma, Chronic Obstructive Pulmonary Disease, and hypertension). Smaller numbers focussed on tools for other conditions including: transient ischaemic attack and stroke, abdominal aortic aneurysm, respiratory infections, psychiatric disorders, skin conditions, hearing loss, and familial conditions (one or two publications on each). Some tools were also designed to support general delivery of care across a range of domains such as maternal and child health, occupational health, behavioural health, and geriatric home care.

A third of articles (31) utilised purely qualitative methods, almost all of which included interviews and/or focus groups with health professionals. One exception reported conversation analysis of audio- and video-recorded consultations and another study reported observations of consultations. Twenty-eight articles reported quantitative methods; 23 involved a survey of health professionals and/or analysis of EMR data or usage data from the investigated tool. The other quantitative articles included three randomised controlled trials and two observational studies. The remaining 28 articles utilised mixed methodologies. The majority of these involved either a survey of health professionals plus qualitative interviews/focus groups (n = 12) or an analysis of EMR/tool usage data in addition to qualitative interviews, focus groups and/or observations (n = 15). Four further articles were systematic reviews, two involving qualitative synthesis and one being a mixed-methods narrative review. All included articles are summarised in the data extraction table (Table 1).

Table 1 Data extraction table

Workload-related findings

The scoping review had the broad aim of identifying evidence regarding impacts on workload and workflow; the included articles most frequently reported these issues in terms of time and consultation durations. Findings relating to perceived and objectively-measured impacts, on either the time spent interacting with an eCDS tool or on whole consultation durations, are summarised first (also in Table 2). Findings from articles that reported other workload-relevant results are summarised thereafter.

Table 2 Summary of key findings from qualitative and quantitative evidence

Perceived impacts on consultation duration

Seventy-two articles described perceived impacts on consultation duration. These were gathered from qualitative interviews or focus groups with health professionals, often conducted with the aim of identifying barriers and facilitators to implementing eCDS tools in practice. Despite the wide range of contexts and functionalities of the eCDS tools encompassed within this review, the majority of articles indicated that using an eCDS tool was thought to be associated with an increase in consultation duration (n = 36). Some showed a mix of views among health professionals (n = 20). Six articles reported an overall impression that an eCDS tool reduced or ‘saved’ time within the consultation. The remaining articles either indicated no perceived impact on consultation duration (n = 4) or made no explicit conclusion (n = 7).

Perceived increase in consultation duration

Among the 36 articles that indicated a perceived increase in consultation duration, the most commonly highlighted concerns related to existing time pressures and a lack of time during a consultation for clinicians to interact with eCDS tools and/or to carry out resultant recommended actions [16, 27, 28, 30, 36, 41, 47, 76, 93, 96, 97, 100, 102, 110, 111, 115, 123]. A prevalent view was that workload was ‘already heavy’ and that using eCDS tools would inevitably add burden [31, 44, 49, 60, 70, 78, 89, 102, 111, 115, 123]. In the case of one tool to support delivery of preventive care through review of patients’ lifestyle factors, a general sense of lacking time for preventive care drove the view that the tool increased consultation duration [31]. Hirsch et al. (2012), however, highlighted that even though the majority of physicians in their study subjectively appraised consultation duration as being extended (85%), more of these physicians felt that the time extension was ‘acceptable’ than judged it to be ‘unacceptable’ [52].

The usual flow of tasks to complete during a consultation (often referred to as ‘workflow’ [29, 33, 35, 39, 40, 42, 47, 58, 66, 67, 72, 74, 75, 84, 86, 91]) was commonly expected to be disrupted by eCDS tools, causing an increase in consultation duration [30, 34, 35, 47, 83, 93, 102, 104]. Specific time-consuming functions of tools, such as reading text, additional data entry and using tools which were stand-alone from the EMR [16, 34, 70, 92, 102, 107, 117, 123], as well as perceptions of poor or slow-functioning software [35, 37, 60, 104], were also highlighted. Some health professionals expressed concern that eCDS tools could negatively affect the trajectory of the conversation with patients; for example, that introducing unexpected discussion, such as addressing the risk of cancer, would overtake the allotted consultation time and cause clinics to run late [15, 33, 53, 92].

Among these 36 articles, a wide range of eCDS tools with varying features and functionality was described (some overlapping). Thirteen involved tools which could interrupt the consultation, by presenting an on-screen alert containing risk or safety information, triggered by opening the EMR or by inputting diagnosis or prescription details [27, 28, 36, 53, 59, 78, 83, 89, 92, 93, 100, 104, 117, 131]. In addition, ten of these articles specifically highlighted the issue of the tool directing the clinician’s attention towards a condition or matter that was not the reason for the encounter [27, 28, 33, 36, 53, 60, 63, 83, 100, 102, 104]. This was seen as necessitating additional time and/or workload, as a result of requiring prolonged discussion with the patient, serving as a distraction, and adding more tasks to already busy consultations. An eCDS tool flagging an issue that did not match the reason for the encounter could be unhelpful if seen as an ‘unwelcome intrusion’ [105], or if it undermined a clinician’s professional expertise, particularly where there were doubts regarding the tool’s accuracy [51, 105]. Such perceptions would be barriers to using or responding to such tools [51, 53, 60, 104]. Arranging a follow-up consultation in order to allow time for additional discussion and tasks was cited as an option for overcoming such barriers [27, 33].

Thirteen articles presented non-interruptive eCDS tools, accessed by a clinician at any time, used to obtain information, decision support or risk calculation, either for individual patients or as an audit tool used across the practice population [15, 30,31,32,33,34,35, 37, 41, 44, 49, 97, 123]. Eight articles described systems that were standalone from the EMR such as web-based eCDS tools [15, 16, 31, 32, 34, 37, 44, 49].

Perceived decrease in consultation duration

The six articles that reported a perceived decrease in consultation duration suggested explanations which included a reduced need for data entry [62, 81, 98], synchronisation with the usual workflow of decision-making [118] and saving time when discussing risk management of specific conditions during the consultation [67, 68]. In terms of the purpose, features and functionality of the studied eCDS tools, the articles referred mainly to tools that were seen to improve efficiency, four of which featured a tool designed to support clinicians in the management of conditions rather than their diagnosis. All of the tools described were either embedded within the EMR system or linked to or interacted with the EMR in some way. Two included an interruptive component among other functions [67, 68] and two were entirely user-accessed [62, 81].

No perceived impact on consultation duration

No specific causal factors were suggested by the articles that reported an overall perception of no impact on consultation duration. One study of a cardiovascular risk assessment tool highlighted that consultation duration was perceived to be increased in cases where the GP did not expect the patient’s risk to be high; however, the number of such instances was low [63]. A study involving both a survey of and interviews with US physicians about a family history data collection tool showed that none reported an adverse impact on their workflow [127]. In terms of the studied eCDS tools’ purpose, features and functionality, the tools described included one with an interruptive component (a cardiovascular risk score alert [63]) and two that were non-interruptive: a tool pre-populated by clinic staff that generated an email to the physician one week ahead of a patient’s visit to prioritise Chronic Kidney Disease care [42], and a computerised Family Health History CDS tool which included risk stratification [127].

Objectively-measured impacts on consultation duration

Twenty-six articles reported an objective measure of time. These included: (i) the time spent using or interacting with an eCDS tool (ranging from three seconds [73] to 0.5–13 min [35, 50, 54, 84, 90, 116, 120, 121]) and/or (ii) consultation duration [30, 38, 40, 45, 48, 57, 73, 79, 95, 103, 108, 109, 113, 114, 119, 122], including one article which measured the time from triage to final disposition decision [128].

Increase in consultation duration

Overall, three articles suggested that consultation duration increased, although none measured consultation duration directly. Two of these articles reported that the time taken to use the eCDS tool was ‘too long’ for a typical ten-minute consultation (four minutes [50] and 13 min [54]), implying that consultation durations would increase as a consequence. One of these two articles highlighted the low rates of usage of the eCDS tool as an important consideration alongside the authors’ conclusion [49]. The third study also did not directly measure time, but instead reported ‘visit type’ as a proxy measure of consultation duration; clinicians more often used the eCDS tool in the longer, annual medical review visits (usually allotted 40 min in that study) than in the shorter, acute care visits [38].

The eCDS tools described in these articles shared no particular purpose, features, or functionality, and none of these characteristics was highlighted as a potential explanation for the reported increase in consultation duration.

Decrease in consultation duration

Four articles suggested that consultation duration decreased, noting that the eCDS tools helped clinicians to undertake specific tasks more quickly. Two found that calculating cardiovascular risk scores and making clinical decisions, when assisted by an eCDS tool, was faster [116, 120], and another found a 7.3-min reduction in time within an asthma chart review consultation [121]. The fourth reported consultations to be 3.41 min shorter on average when using an eCDS tool to support diagnosis and management of hypertension [122]. All of the tools featured in these articles supported clinicians in the management of long-term conditions by design, or included an element of management support, as opposed to solely supporting initial risk assessment and/or diagnosis. All bar one described tools that were embedded within the EMR system, with only one of these having an interruptive component [120].

No impact on consultation duration

Nine articles concluded that eCDS tools neither extended nor saved time in consultations. Having compared an intervention and control group, or a set of baseline and intervention consultations, five articles reported no significant difference in consultation duration [40, 103, 108, 109, 128]. Lafata et al. (2016) found no association between use of a range of eCDS tools and consultation duration [57]. Other articles reported that their measured durations when using various eCDS tools (9.05 min [48] and 10 min [95]) were ‘similar’ in length to a standard consultation, concluding that the tools did not prolong consultations [119]. The remaining articles either made no stated conclusion regarding duration or their conclusions were unclear [79, 84, 90, 114, 130].

A common explanation for a lack of impact on consultation duration, or for mixed perceptions of such impacts, was low rates of tool usage by clinicians in studies. Suggested reasons for non-use included perceived or actual difficulties in the tool’s functionality, slow-functioning software [30, 35, 37, 61], disruption to the usual workflow of a consultation [30, 83, 93], or the requirement for data entry additional to what would normally be entered into the EMR, particularly where eCDS tools operated as standalone systems [34, 71].

In terms of the purpose, features, and functionality of the tools described by these articles, one article discussed only a system that was stand-alone from the EMR [95]; the other articles reported either a tool embedded in the EMR system or described a range of both embedded and stand-alone systems. None of the described tools had an interruptive component. Most guided or supported either prescribing tasks or decision making during consultations, with a focus on patient management.

Conflict between perceived and objectively-measured impacts on consultation duration

Seven articles reported both perceived and objectively-measured impacts on consultation duration when using eCDS tools. Two found that both their perceived and objective measures suggested increased duration [50, 114]. However, five indicated a conflict between the perceived and objectively-measured impacts [30, 35, 45, 73, 108]. The common perception was that consultation duration was (or would be) increased, but no measurable difference in duration was actually found. All of the tools described by these five articles were embedded within the EMR system, and did not include an interruptive alert feature or pertain to conditions or tasks likely to be irrelevant to the consultation.

Trafton et al. (2010) described physicians’ perceptions that eCDS for prescribing opioid therapy was ‘too time-consuming’, with insufficient time available during a 15-min consultation to use it [73]. However, the measured time spent using the tool ranged from 3 s to 10 min, and the study concluded that clinicians had ‘a reasonable amount of time’ to use the system. Curry & Reed (2011) reported that physicians felt the time taken for an eCDS system to interact with the EMR was ‘too slow’ despite the captured duration for this interaction being less than one second, although it is unclear whether this reflects physicians’ views of the overall interaction time rather than data processing time specifically [35]. Bauer et al. (2013) reported that although primary care clinic staff felt that a paediatric visit eCDS system slowed down clinics, an “informal” time study did not show any significant delays [30].

Porat et al. (2017) reported that 13 GPs (38%) felt their consultations took longer when using an eCDS system. They felt that inputting free text into the EMR instead was faster, and these same GPs did indeed have longer consultations when using the tool (an average of 15.45 min compared with their baseline 13.53 min average consultations). However, this was the case only for the GPs who expressed concern about time, and not for the GP sample as a whole where no significant difference in consultation duration was observed.

Further, a study by Gregory et al. (2017) found that the perception of physicians regarding the time available to manage eCDS alerts (termed ‘subjective workload’) was not correlated with actual hours spent managing alerts based on physicians’ self-report (‘objective workload’) [46]. When the authors examined whether these ‘subjective’ or ‘objective’ workload measures predicted physician burnout, only the ‘subjective’ measure was predictive. This suggests that the perception of eCDS alert burden in the context of existing high workload is more problematic than the measure of actual time spent managing alerts.

Methods utilised to measure consultation duration

A range of methods was utilised to objectively measure consultation duration or the time spent using an eCDS tool. In five articles, clinicians provided a self-report of time spent, using either a paper or an electronic case report form [45, 95, 114, 121, 122]. In four articles, a member of the research team manually timed the duration of study consultations or scenarios [40, 54, 57, 73]. Five articles reported time data captured electronically from log files within the eCDS tool itself, including clinician time spent using particular elements of the tool or completing certain activities [35, 50, 73, 84, 90]. Three articles described using specialist software, operating in the background, designed to record users’ interactions with the eCDS tool during consultations [116, 119, 120]; the specific software included Morae Recorder and Camtasia, both TechSmith Corporation products. Three studies used video- or audio-recordings to capture consultation durations in addition to other elements of the consultation they aimed to observe [48, 80, 113]. Two articles, which referred to the same core UK study, described capturing duration data from the practice IT system (Vision), based on the opening and closing of the EMR [108, 109]. One USA study estimated consultation duration based on the reasons patients were attending – either for a ‘shorter’ visit, such as for acute care or follow-up, or for a ‘longer’ visit, such as for a general medical examination [38]. Two articles provided insufficient details of the methods used [30, 117].
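
As an illustration of the time-stamp approach described above, the following minimal sketch (in Python) shows how consultation durations might be derived from record open/close events. The log format, field names and the 60-minute plausibility cap are hypothetical assumptions for illustration only, not features of any specific practice IT system.

from datetime import datetime

# Hypothetical EMR audit-log rows: (patient_id, event, timestamp).
events = [
    ("p001", "record_opened", "2019-09-02 09:00:10"),
    ("p001", "record_closed", "2019-09-02 09:11:45"),
    ("p002", "record_opened", "2019-09-02 09:15:02"),
    ("p002", "record_closed", "2019-09-02 09:27:30"),
]

def consultation_durations(rows, max_plausible_minutes=60):
    """Pair each record-opened event with the next record-closed event for the
    same patient and return durations in minutes, discarding implausibly long
    gaps (for example, records left open after the consultation has ended)."""
    open_times = {}
    durations = {}
    for patient_id, event, timestamp in rows:
        t = datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S")
        if event == "record_opened":
            open_times[patient_id] = t
        elif event == "record_closed" and patient_id in open_times:
            minutes = (t - open_times.pop(patient_id)).total_seconds() / 60
            if minutes <= max_plausible_minutes:
                durations[patient_id] = round(minutes, 1)
    return durations

print(consultation_durations(events))  # {'p001': 11.6, 'p002': 12.5}

Such an approach is subject to the caveats noted in the Discussion: records left open after the patient leaves, or parts of the consultation conducted with the record closed, would distort the derived durations.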

Other workload-related findings

Twenty-seven articles included additional workload-related findings. Twenty-three of these reported the impact on ‘workflow’, regarding how eCDS tools altered the usual order in which patient-related tasks were carried out [33, 35, 39, 40, 47, 58, 66, 74, 75, 83, 84, 87, 91, 93, 94, 100, 102,103,104, 111, 113, 118, 119, 127]. Five referred to the impact of using eCDS tools on the trajectory of dialogue with patients, to the extent that follow-up appointments were arranged to avoid consultations running late [15, 39, 75, 94, 100]. One of these mentioned clinicians’ concerns about ‘taking time away’ from other waiting patients, expressed as a barrier to the implementation of eCDS systems [26]. Many of the tools in these articles were clearly described as having an interruptive alert component [33, 58, 83, 84, 86, 91, 93, 100, 104, 111, 118, 119].

Some articles (n = 10) mentioned ‘alert fatigue’, indicating that eCDS tools designed to support health professionals can increase the number of on-screen alerts, leading to a high chance of alerts being missed or ignored [15, 36, 40, 42, 51, 74, 100, 105, 111, 117]. None of these articles reported a decrease in consultation duration.

Cognitive workload was referred to in three articles. Qualitative interview data suggested that clinicians felt an eCDS tool for prescribing tuberculosis preventive therapy decreased their cognitive workload during consultations [98]. This was perceived as advantageous as it reduced the amount of time spent documenting medications and their contraindications. However, in two articles, eCDS tools were noted to increase cognitive workload. A systematic review that examined factors influencing the appropriateness of interruptive alerts found such alerts increased cognitive weariness, and that an ‘overload’ of alerts increased mental workload [117]. A study of an eCDS tool for assessing cardiovascular risk also highlighted clinicians’ concerns about the cognitive burden of changing to a new way of calculating risk compared with the conventional method they had used until that point [124].

One study reported workload expressed as the number of follow-up consultations needed. This study examined eCDS tools for patients with upper respiratory tract infections, and found no significant difference in the proportion of follow-ups needed between the intervention and control arms [82].

Discussion

This scoping review identified 95 articles that examined the use of eCDS tools by health professionals in primary care and reported findings that included impacts on workload and workflow. While the scoping review had the broad aim of identifying evidence regarding these issues, they were most frequently reported in terms of time and consultation durations. A large proportion of the research was qualitative and exploratory in nature. The majority of articles reported health professionals’ subjective perceptions of the time spent using eCDS tools and/or the impact on consultation duration; a smaller evidence base objectively measured the impact of using eCDS tools on workload, specifically in relation to consultation duration and the flow of consulting sessions.

The reviewed literature reflected that, although a small number of articles suggested that using certain types of eCDS tool decreased consultation duration, a strong perception existed among health professionals that consultation duration was increased when eCDS tools were used. It is worth noting that eCDS tools designed to support management of health conditions and tools supporting diagnosis and associated risk assessment may have different impacts on consultation workload and duration; the small number of reviewed articles that indicated a time saving mostly featured tools designed to support patient management. It is also notable that articles describing tools that introduced a condition or issue outside of the patient’s or clinician’s agenda for the consultation frequently reported clinicians’ perceptions that workload and/or consultation duration increased.

The perception that consultation duration was increased is not necessarily backed by studies that objectively measured actual consultation durations. Although many of the quantitative articles reported the time taken to use various eCDS tools within consultations, fewer studies captured the duration of entire consultations and/or made a comparison between an intervention and non-intervention group. Interestingly, those that did showed no significant difference in consultation duration when using eCDS tools compared with not using them [40, 103, 108, 109, 128]. Various methods were used to capture consultation durations, with no single method seeming most practical or accurate. For instance, while the manual (stopwatch) timing of consultations by a researcher [54, 73] might arguably capture consultation durations more accurately than clinicians’ self-report, this method could be seen as intrusive to the consultation. Capturing time stamp data in an automated way, for example from EMR systems [108, 109], might address this issue and provide a practical solution, but errors may be introduced by this method if patient records are left open after the end of a consultation, or if some part of the consultation takes place when records are closed.

The reviewed literature highlighted that low usage rates of eCDS tools by clinicians in studies (for varying reasons) may be responsible for a lack of observable impact on workload or consultation duration. Conversely, a tool that fits easily within the usual workflow of a consultation might explain the lack of increased duration. The experience of ‘alert fatigue’ was frequently mentioned; a large number of different on-screen alerts during consultations can desensitise clinicians to alerts, and an alert generated by a new tool may be missed or ignored [27, 28, 50]. Ignoring an alert or not utilising an eCDS tool might indicate a clinician’s preference to rely on their own clinical judgment, or doubts as to an alert’s accuracy or relevance, as is particularly highlighted within the alert fatigue literature [36, 107, 132,133,134]. It might equally be the case that a clinician did indeed utilise or respond to the eCDS tool, but arranged a follow-up appointment to allow more time to discuss the clinical issues raised [26, 28, 33], thereby not impacting the duration of the current consultation. Whether use of eCDS tools had an impact on the duration of the healthcare ‘episode’ as a whole (i.e. the index consultation plus the number and duration of any subsequent follow-up consultations) was unclear from the reviewed articles.

Reviewing articles that included both a subjective measure of health professionals’ perceptions and an objective measure of consultation duration provided an opportunity to observe whether the perceptions were borne out in reality. These articles most commonly reported that health professionals felt consultations were (or would be) prolonged by using eCDS tools, but objective measures did not consistently back this up [30, 35, 73]. However, the evidence base for actual consultation durations associated with using eCDS tools remains much smaller than that for the perceived impacts on consultation durations. One should note that the perception or expectation of health professionals in relation to consultation workload and duration is very important. Firstly, perceptions and expectations may well determine how often eCDS tools are used. Secondly, ‘subjective’ workload (clinicians’ reported amount of time available to manage alerts), rather than ‘objective’ workload (the number of hours actually spent managing alerts), has been found to be predictive of physician burnout [45]. It is also worth noting, however, that a perception or an objective measure of increased workload or duration may not always be viewed negatively; for example, it may not matter how much consultation duration is increased (if it is) if diagnosis and/or management is improved [52].

Strengths and limitations

This study benefits from undertaking a comprehensive literature review addressing a key area of primary care service provision, namely the interface between technologically enhanced service provision in the form of eCDS, and clinical workload and workflow. We successfully identified and summarised a large number of articles published from a variety of international settings.

The review may have been affected by the inclusion of names of specific eCDS tools within the search terms. This reflects the research team members’ awareness of existing systems in UK primary care; tools not known to the authors may have been missed from the review. We identified a number of studies through systematic reviews that were not found through our initial searches, which suggests that our initial searches may have missed some relevant work. Inclusion of articles published in the last ten years, since 2009, may also have omitted potentially relevant research on eCDS since its inception in the 1960s; however, we aimed to identify evidence from research articles based in modern-day primary care settings. In addition, although the vast majority of international scientific literature is currently published in English, our exclusion of foreign language articles may have prevented fuller coverage of non-UK primary care contexts with different standards of consultation lengths, workload or workforce challenges, and policy expectations. The review also included a large number of qualitative articles, but time and resource constraints prevented a full qualitative synthesis of these articles.

The two independent reviewers who undertook screening were not always the same two reviewers due to resource constraints; however, EF undertook all stages of the review and had regular discussions with the small group of four ‘second’ reviewers. Only EF undertook data extraction, and so details from included articles may have been affected by selection bias.

Conclusion

This scoping review identified over 90 articles that explored the use of eCDS tools in primary care by health professionals in relation to aspects of workload, including consultation duration. Whilst the qualitative literature showed a strong perception among health professionals that eCDS tools increased workload and consultation duration, a smaller number of studies captured quantitative measures, which neither disputed nor supported this view.

eCDS tools designed to support GPs will continue to be introduced within primary care with the aim of assisting clinicians to diagnose and manage patients effectively. Despite the absence of strong objective evidence that using eCDS tools necessarily leads to increased (or decreased) consultation durations, the perceptions of additional time being taken within consultations, additional workload being generated, and workflow being disrupted, are barriers to implementation and routine use of eCDS tools, irrespective of their potential benefit in the diagnosis or management of patients.

Further quantitative evidence measuring actual consultation duration and GP workload is needed to confirm whether the reported concerns are justifiable, particularly in the time-constrained setting of primary care. Future efforts to implement potentially valuable eCDS tools need to take account of the context of increasing GP workload, workforce shortages and associated pressures, and the ongoing challenges generated in the wake of COVID-19.

Availability of data and materials

All data generated or analysed during this study are included in this published article.

Abbreviations

CDS: Clinical decision support

eCDS: Electronic clinical decision support

EMR: Electronic medical record

GP: General practitioner

References

  1. Hobbs FDR, Bankhead C, Mukhtar T, Stevens S, Perera-Salazar R, Holt T, et al. Clinical workload in UK primary care: a retrospective analysis of 100 million consultations in England, 2007–14. Lancet. 2016;387:2323–30. https://doi.org/10.1016/S0140-6736(16)00620-6.


  2. Baird BCA, Honeyman M, Maguire D, Das P. Understanding the Pressures in General Practice. London: The King’s Fund; 2016.


  3. Roland M, Everington S. Tackling the crisis in general practice. BMJ. 2016;352:i942. https://doi.org/10.1136/bmj.i942.


  4. Walker B, Moss C, Gibson J, Sutton M, Spooner S, Checkland K. Tenth National GP Worklife Survey. Manchester: Policy Research Unit in Commissioning and the Healthcare System Manchester Centre for Health Economics; 2019.

  5. Sansom A, Terry R, Fletcher E, Salisbury C, Long L, Richards SH, et al. Why do GPs leave direct patient care and what might help to retain them? A qualitative study of GPs in South West England. BMJ Open. 2018;8:e019849. https://doi.org/10.1136/bmjopen-2017-019849.


  6. Royal College of General Practitioners. General practice in the post-COVID world: challenges and opportunities for general practice. London: RCGP; 2021.


  7. Mughal F, Mallen CD, McKee M. The impact of COVID-19 on primary care in Europe. The Lancet Regional Health – Europe. 2021;6. https://doi.org/10.1016/j.lanepe.2021.100152

  8. Murphy M, Scott LJ, Salisbury C, Turner A, Scott A, Denholm R, et al. Implementation of remote consulting in UK primary care following the COVID-19 pandemic: a mixed-methods longitudinal study. Br J Gen Pract. 2021;71:e166–77. https://doi.org/10.3399/BJGP.2020.0948.


  9. Mann C, Turner A, Salisbury C. The impact of remote consultations on personalised care: Evidence briefing. Bristol: Centre for Academic Primary Care, University of Bristol; 2021.


  10. Hamilton W, Green T, Martins T, Elliott K, Rubin G, Macleod U. Evaluation of risk assessment tools for suspected cancer in general practice: a cohort study. Br J Gen Pract. 2013;63:e30–6. https://doi.org/10.3399/bjgp13X660751.


  11. Price S, Spencer A, Medina-Lara A, Hamilton W. Availability and use of cancer decision-support tools: a cross-sectional survey of UK primary care. Br J Gen Pract. 2019;69:e437–43. https://doi.org/10.3399/bjgp19X703745.


  12. Usher-Smith J, Emery J, Hamilton W, Griffin SJ, Walter FM. Risk prediction tools for cancer in primary care. Br J Cancer. 2015;113:1645–50. https://doi.org/10.1038/bjc.2015.409.


  13. Alssema M, Newson RS, Bakker SJ, Stehouwer CD, Heymans MW, Nijpels G, et al. One risk assessment tool for cardiovascular disease, type 2 diabetes, and chronic kidney disease. Diabetes Care. 2012;35:741–8. https://doi.org/10.2337/dc11-1417.


  14. Buijsse B, Simmons RK, Griffin SJ, Schulze MB. Risk assessment tools for identifying individuals at risk of developing type 2 diabetes. Epidemiol Rev. 2011;33:46–62. https://doi.org/10.1093/epirev/mxq019.


  15. Chiang PP, Glance D, Walker J, Walter FM, Emery JD. Implementing a QCancer risk tool into general practice consultations: an exploratory study using simulated consultations with Australian general practitioners. Br J Cancer. 2015;112(Suppl 1):S77-83. https://doi.org/10.1038/bjc.2015.46.


  16. Rubin G, Walter FM, Emery J, Hamilton W, Hoare Z, Howse J, et al. Electronic clinical decision support tool for assessing stomach symptoms in primary care (ECASS): a feasibility study. BMJ Open. 2021;11:e041795. https://doi.org/10.1136/bmjopen-2020-041795.


  17. NHS. Network Contract Directed Enhanced Service: Early Cancer Diagnosis Guidance. 2021.


  18. Holt TA, Fletcher E, Warren F, Richards S, Salisbury C, Calitri R, et al. Telephone triage systems in UK general practice: analysis of consultation duration during the index day in a pragmatic randomised controlled trial. Br J Gen Pract. 2016;66:e214–8. https://doi.org/10.3399/bjgp16X684001.


  19. Crosbie B, O’Callaghan ME, O’Flanagan S, Brennan D, Keane G, Behan W. A real-time measurement of general practice workload in the Republic of Ireland: a prospective study. Br J Gen Pract. 2020;70:e489–96. https://doi.org/10.3399/bjgp20X710429.


  20. Sinnott C, Moxey JM, Marjanovic S, Leach B, Hocking L, Ball S, et al. Identifying how GPs spend their time and the obstacles they face: a mixed-methods study. Br J Gen Pract. 2021. https://doi.org/10.3399/BJGP.2021.0357.


  21. Porter AMW, Howie JGR, Levinson A. Measurement of stress as it affects the work of the general practitioner. Fam Pract. 1985;2(3):136–46.


  22. Howie JG. James Mackenzie lecture. Quality of caring–landscapes and curtains. J R Coll Gen Pract. 1987;37:4.


  23. Heaney DJ, Howie JG, Porter AM. Factors influencing waiting times and consultation times in general practice. Br J Gen Pract. 1991;41:315–9.


  24. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8:19–32. https://doi.org/10.1080/1364557032000119616.


  25. Colquhoun HL, Levac D, O’Brien KK, Straus S, Tricco AC, Perrier L, et al. Scoping reviews: time for clarity in definition, methods, and reporting. J Clin Epidemiol. 2014;67:1291–4. https://doi.org/10.1016/j.jclinepi.2014.03.013.


  26. Ahmad F, Skinner HA, Stewart DE, Levinson W. Perspectives of family physicians on computer-assisted health-risk assessments. J Med Internet Res. 2010;12:e12. https://doi.org/10.2196/jmir.1260.


  27. Arts DL, Abu-Hanna A, Medlock SK, van Weert HCPM. Effectiveness and usage of a decision support system to improve stroke prevention in general practice: A cluster randomized controlled trial. PLoS ONE. 2017;12:e0170974. https://doi.org/10.1371/journal.pone.0170974.


  28. Arts DL, Medlock SK, van Weert HCPM, Wyatt JC, Abu-Hanna A. Acceptance and barriers pertaining to a general practice decision support system for multiple clinical conditions: A mixed methods evaluation. PLoS ONE. 2018;13:e0193187. https://doi.org/10.1371/journal.pone.0193187.


  29. Baron S, Filios MS, Marovich S, Chase D, Ash JS. Recognition of the Relationship Between Patients’ Work and Health: A Qualitative Evaluation of the Need for Clinical Decision Support (CDS) for Worker Health in Five Primary Care Practices. J Occup Environ Med. 2017;59:e245–50. https://doi.org/10.1097/JOM.0000000000001183.


  30. Bauer NS, Carroll AE, Downs SM. Understanding the acceptability of a computer decision support system in pediatric primary care. J Am Med Inform Assoc. 2014;21:146–53.


  31. Carlfjord S, Andersson A, Lindberg M. Experiences of the implementation of a tool for lifestyle intervention in primary health care: a qualitative study among managers and professional groups. BMC Health Serv Res. 2011;11:195. https://doi.org/10.1186/1472-6963-11-195.


  32. Carlfjord S, Lindberg M, Andersson A. Staff perceptions of addressing lifestyle in primary health care: a qualitative evaluation 2 years after the introduction of a lifestyle intervention tool. BMC Fam Pract. 2012;13:99. https://doi.org/10.1186/1471-2296-13-99.


  33. Chiang J, Furler J, Boyle D, Clark M, Manski-Nankervis J-A. Electronic clinical decision support tool for the evaluation of cardiovascular risk in general practice: A pilot study. Aust Fam Physician. 2017;46:764–8.


  34. Crawford F. General practitioners' and nurses' experiences of using computerised decision support in screening for diabetic foot disease: implementing Scottish Clinical Information - Diabetes Care in routine clinical practice. Inform Prim Care. 2011;2:18.

  35. Curry L, Reed MH. Electronic decision support for diagnostic imaging in a primary care setting. J Am Med Inform Assoc. 2011;18:267–70. https://doi.org/10.1136/amiajnl-2011-000049.


  36. Dikomitis L, Green T, Macleod U. Embedding electronic decision-support tools for suspected cancer in primary care: a qualitative study of GPs’ experiences. Prim Health Care Res Dev. 2015;16:548–55.


  37. Duyver C, Van Houdt S, De Lepeleire J, Dory V, Degryse J-M. The perception of the clinical relevance of the MDS-Home Care(C) tool by trainers in general practice in Belgium. Fam Pract. 2010;27:638–43. https://doi.org/10.1093/fampra/cmq055.


  38. Eaton J, Reed D, Angstman KB, Thomas K, North F, Stroebel R, et al. Effect of visit length and a clinical decision support tool on abdominal aortic aneurysm screening rates in a primary care practice. J Eval Clin Pract. 2012;18:593–8. https://doi.org/10.1111/j.1365-2753.2010.01625.x.


  39. Laforest F, Kirkegaard P, Mann B, Edwards A. Genetic cancer risk assessment in general practice: systematic review of tools available, clinician attitudes, and patient outcomes. Br J Gen Pract. 2019;69:e97–105.


  40. Fathima M, Peiris D, Naik-Panvelkar P, Saini B, Armour CL. Effectiveness of computerized clinical decision support systems for asthma and chronic obstructive pulmonary disease in primary care: a systematic review. BMC Pulm Med. 2014;14:189.


  41. Finkelstein J, Wood J, Crew KD, Kukafka R. Introducing a Comprehensive Informatics Framework to Promote Breast Cancer Risk Assessment and Chemoprevention in the Primary Care Setting. AMIA Jt Summits Transl Sci Proc. 2017;2017:58–67.


  42. Fox C, Vassalotti J. Checklists as computer decision support at the point of care: a step forward in the recognition and treatment of CKD by primary care physicians. Clin J Am Soc Nephrol. 2014;9:1505–6.


  43. Gill J, Kucharski K, Turk B, Pan C, Wei W. Ambul Care Manage. 2019;42(2):105–1.

  44. Green T, Martins T, Hamilton W, Rubin G, Elliott K, Macleod U. Exploring GPs’ experiences of using diagnostic tools for cancer: a qualitative study in primary care. Fam Pract. 2015;32:101–5. https://doi.org/10.1093/fampra/cmu081.


  45. Gregory ME, Russo E, Singh H. Electronic Health Record Alert-Related Workload as a Predictor of Burnout in Primary Care Providers. Appl Clin Inform. 2017;8:686–97. https://doi.org/10.4338/ACI-2017-01-RA-0003.


  46. Gregory ME, Russo E, Singh H. Electronic Health Record Alert-Related Workload as a Predictor of Burnout in Primary Care Providers. App Clin Inform. 2017;8:686–97. https://doi.org/10.4338/ACI-2017-01-RA-0003.


  47. Harry ML, Truitt AR, Saman DM, Henzler-Buckingham HA, Allen CI, Walton KM, et al. Barriers and facilitators to implementing cancer prevention clinical decision support in primary care: a qualitative study. BMC Health Serv Res. 2019;19:534. https://doi.org/10.1186/s12913-019-4326-4.


  48. Hayward J, Thomson F, Milne H, Buckingham S, Sheikh A, Fernando B, et al. “Too much, too late”: mixed methods multi-channel video recording study of computerized decision support systems and GP prescribing. J Am Med Inform Assoc. 2013;20:e76-84. https://doi.org/10.1136/amiajnl-2012-001484.


  49. Henderson EJ, Rubin GP. The utility of an online diagnostic decision support system (Isabel) in general practice: a process evaluation. JRSM Short Reports. 2013;4:31.


  50. Henderson EJ, Rubin GP. The utility of an online diagnostic decision support system (Isabel) in general practice: a process evaluation. JRSM Short Rep. 2013;4:31. https://doi.org/10.1177/2042533313476691.


  51. Heselmans A, Aertgeerts B, Donceel P, Geens S, Van de Velde S, Ramaekers D. Family physicians’ perceptions and use of electronic clinical decision support during the first year of implementation. J Med Syst. 2012;36:3677–84. https://doi.org/10.1007/s10916-012-9841-3.


  52. Hirsch O, Keller H, Krones T, Donner-Banzhoff N. arriba-lib: evaluation of an electronic library of decision aids in primary care physicians. BMC Med Inform Decis Mak. 2012;12:48. https://doi.org/10.1186/1472-6947-12-48.


  53. Holt TA, Dalton AR, Kirkpatrick S, Hislop J, Marshall T, Fay M, et al. Barriers to a software reminder system for risk assessment of stroke in atrial fibrillation: a process evaluation of a cluster randomised trial in general practice. Br J Gen Pract. 2018;68:e844–51. https://doi.org/10.3399/bjgp18X699809.


  54. Hoonakker P, Khunlertkit A, Tattersal M, Keevil J. Computer decision support tools in primary care. Work. 2012;41(Suppl 1):4474–8. https://doi.org/10.3233/WOR-2012-0747-4474.


  55. Kortteisto T, Komulainen J, Mäkelä M, kunnamo I, Kaila M. BMC Health Serv Res. 2012;12:349.

  56. Krog M, Nielsen M, Videbæk J, Bro J, Christensen K, Mygind A. BMC Health Serv Res. 2018;18:503.

  57. Lafata JE, Shay LA, Brown R, Street RL. Office-Based Tools and Primary Care Visit Communication, Length, and Preventive Service Delivery. Health Serv Res. 2016;51:728–45. https://doi.org/10.1111/1475-6773.12348.


  58. Litvin CB, Ornstein SM, Andrea MW, Nemeth LS, Nietert PJ. Adoption of a clinical decision support system to promote judicious use of antibiotics for acute respiratory infections in primary care. Int J Med Informatics. 2012;81:521–6. https://doi.org/10.1016/j.ijmedinf.2012.03.002.


  59. Lugtenberg M, Pasveer D, van der Weijden T, Westert GP, Kool RB. Exposure to and experiences with a computerized decision support intervention in primary care: results from a process evaluation. BMC Fam Pract. 2015;16:141.


  60. Lugtenberg M, Weenink JW, van der Weijden T, Westert GP, Kool RB. Implementation of multiple-domain covering computerized decision support systems in primary care: a focus group study on perceived barriers. BMC Med Inform Decis Mak. 2015;15. https://doi.org/10.1186/s12911-015-0205-z

  61. Lugtenberg M, Weenink JW, van der Weijden T, Westert GP, Kool RB. Implementation of multiple-domain covering computerized decision support systems in primary care: a focus group study on perceived barriers. BMC Med Inform Decis Mak. 2015;15:82. https://doi.org/10.1186/s12911-015-0205-z.


  62. Pannebakker MM, Mills K, Johnson M, Emery JD, Walter FM. Understanding implementation and usefulness of electronic clinical decision support (eCDS) for melanoma in English primary care: a qualitative investigation. BJGP Open. 2019;3:bjgpopen18X101635. https://doi.org/10.3399/bjgpopen18X101635

  63. Peiris DP, Joshi R, Webster RJ, Groenestein P, Usherwood TP, Heeley E, et al. An electronic clinical decision support tool to assist primary care providers in cardiovascular disease risk management: development and mixed methods evaluation. J Med Internet Res. 2009;11:e51.


  64. Rieckert A, Sommerauer C, Krumeich A, Sönnichsen A. BMC Fam Pract. 2018;19:110.

  65. Rieckert A, Sommerauer C, Krumeich A, Sönnichsen A. J Am Med Inform Assoc. 2019;26(11):1323–32.

  66. Robertson J, Moxey AJ, Newby DA, Gillies MB, Williamson M, Pearson S-A. Electronic information and clinical decision support for prescribing: state of play in Australian general practice. Fam Pract. 2011;28:93–101. https://doi.org/10.1093/fampra/cmq031.


  67. Sperl-Hillen JM, Crain AL, Margolis KL, Ekstrom HL, Appana D, Amundson G, et al. Clinical decision support directed to primary care patients and providers reduces cardiovascular risk: a randomized trial. J Am Med Inform Assoc. 2018;25:1137–46.


  68. Sperl-Hillen JM, Rossom RC, Kharbanda EO, Gold R, Geissal ED, Elliott TE, et al. Priorities Wizard: Multisite Web-Based Primary Care Clinical Decision Support Improved Chronic Care Outcomes with High Use Rates and High Clinician Satisfaction Rates. EGEMS (Washington, DC). 2019;7:9.


  69. Sperl-Hillen JM, Rossom RC, Kharbanda EO, Gold R, Geissal ED, Elliott TE, et al. Priorities Wizard: Multisite Web-Based Primary Care Clinical Decision Support Improved Chronic Care Outcomes with High Use Rates and High Clinician Satisfaction Rates. EGEMS (Wash DC). 2019;7:9. https://doi.org/10.5334/egems.284.

  70. Sukums F, Mensah N, Mpembeni R, Massawe S, Duysburgh E, Williams A, et al. Promising adoption of an electronic clinical decision support system for antenatal and intrapartum care in rural primary healthcare facilities in sub-Saharan Africa: The QUALMAT experience. Int J Med Inform. 2015;84:647–57. https://doi.org/10.1016/j.ijmedinf.2015.05.002.

  71. Sukums F, Mensah N, Mpembeni R, Massawe S, Duysburgh E, Williams A, et al. Promising adoption of an electronic clinical decision support system for antenatal and intrapartum care in rural primary healthcare facilities in sub-Saharan Africa: The QUALMAT experience. Int J Med Inform. 2015;84:647–57. https://doi.org/10.1016/j.ijmedinf.2015.05.002.

  72. Trafton J, Martins S, Michel M, Lewis E, Wang D, Combs A, et al. Evaluation of the acceptability and usability of a decision support system to encourage safe and effective use of opioid therapy for chronic, noncancer pain by primary care providers. Pain Med. 2010;11:575–85. https://doi.org/10.1111/j.1526-4637.2010.00818.x.

  73. Trafton J, Martins S, Michel M, Lewis E, Wang D, Combs A, et al. Evaluation of the acceptability and usability of a decision support system to encourage safe and effective use of opioid therapy for chronic, noncancer pain by primary care providers. Pain Med. 2010;11:575–85. https://doi.org/10.1111/j.1526-4637.2010.00818.x.

  74. Trinkley KE, Blakeslee WW, Matlock DD, Kao DP, Van Matre AG, Harrison R, et al. Clinician preferences for computerised clinical decision support for medications in primary care: a focus group study. BMJ Health Care Inform. 2019;26:e000015. https://doi.org/10.1136/bmjhci-2019-000015.

  75. Voruganti TR, O’Brien MA, Straus SE, McLaughlin JR, Grunfeld E. Primary care physicians’ perspectives on computer-based health risk assessment tools for chronic diseases: a mixed methods study. J Innov Health Inform. 2015;22:333–9.

  76. Walker JG, Bickerstaffe A, Hewabandu N, Maddumarachchi S, Dowty JG, Jenkins M, et al. The CRISP colorectal cancer risk prediction tool: an exploratory study using simulated consultations in Australian primary care. BMC Med Inform Decis Mak. 2017;17:13. https://doi.org/10.1186/s12911-017-0407-7.

  77. Walker JG, Bickerstaffe A, Hewabandu N, Maddumarachchi S, Dowty JG, Crecrc, et al. The CRISP colorectal cancer risk prediction tool: an exploratory study using simulated consultations in Australian primary care. BMC Med Inform Decis Mak. 2017;17:13. https://doi.org/10.1186/s12911-017-0407-7.

  78. Zazove P, McKee M, Schleicher L, Green L, Kileny P, Rapai M, et al. To act or not to act: responses to electronic health record prompts by family medicine clinicians. J Am Med Inform Assoc. 2017;24:275–80. https://doi.org/10.1093/jamia/ocw178.

  79. Murdoch J, Varley A, Fletcher E, Britten N, Price L, Calitri R, et al. Implementing telephone triage in general practice: a process evaluation of a cluster randomised controlled trial. BMC Fam Pract. 2015;16:47. https://doi.org/10.1186/s12875-015-0263-4.

  80. Murdoch J, Barnes R, Pooler J, Lattimer V, Fletcher E, Campbell JL. The impact of using computer decision-support software in primary care nurse-led telephone triage: interactional dilemmas and conversational consequences. Soc Sci Med. 2015;126:36–47. https://doi.org/10.1016/j.socscimed.2014.12.013.

  81. Jetelina KK, Woodson TT, Gunn R, Muller B, Clark KD, DeVoe JE, et al. Evaluation of an Electronic Health Record (EHR) Tool for Integrated Behavioral Health in Primary Care. J Am Board Fam Med. 2018;31:712–23. https://doi.org/10.3122/jabfm.2018.05.180041.

  82. McGinn TG, McCullagh L, Kannry J, Knaus M, Sofianou A, Wisnivesky JP, et al. Efficacy of an Evidence-Based Clinical Decision Support in Primary Care Practices: A Randomized Clinical Trial. JAMA Intern Med. 2013;173:1584–91. https://doi.org/10.1001/jamainternmed.2013.8980.

  83. Litvin CB, Hyer JM, Ornstein SM. Use of Clinical Decision Support to Improve Primary Care Identification and Management of Chronic Kidney Disease (CKD). J Am Board Fam Med. 2016;29:604–12.

  84. Linder JA, Schnipper JL, Tsurikova R, Yu T, Volk LA, Melnikas AJ, et al. Documentation-based clinical decision support to improve antibiotic prescribing for acute respiratory infections in primary care: a cluster randomised controlled trial. Inform Prim Care. 2009;17:231–40.

  85. Ranta A. Transient ischaemic attack and stroke risk: pilot of a primary care electronic decision support tool. J Prim Health Care. 2013;5(2):138–40. https://doi.org/10.1071/hc13138.

  86. Price M, Davies I, Rusk R, Lesperance M, Weber J. Applying STOPP Guidelines in Primary Care Through Electronic Medical Record Decision Support: Randomized Control Trial Highlighting the Importance of Data Quality. JMIR Med Inform. 2017;5:e15. https://doi.org/10.2196/medinform.6226.

  87. Price M, Davies I, Rusk R, Lesperance M, Weber J. Applying STOPP Guidelines in Primary Care Through Electronic Medical Record Decision Support: Randomized Control Trial Highlighting the Importance of Data Quality. JMIR Med Inform. 2017;5:e15. https://doi.org/10.2196/medinform.6226.

  88. Wan Q, Harris MF, Zwar N, Vagholkar S, Campbell T. Prerequisites for implementing cardiovascular absolute risk assessment in general practice: a qualitative study of Australian general practitioners’ and patients’ views. J Eval Clin Pract. 2010;16:580–4. https://doi.org/10.1111/j.1365-2753.2009.01170.x.

  89. Hor CP, O’Donnell JM, Murphy AW, O’Brien T, Kropmans TJB. General practitioners’ attitudes and preparedness towards Clinical Decision Support in e-Prescribing (CDS-eP) adoption in the West of Ireland: a cross sectional study. BMC Med Inform Decis Mak. 2010;10:2. https://doi.org/10.1186/1472-6947-10-2.

  90. Troeung L, Arnold-Reed D, Chan She Ping-Delfos W, Watts GF, Pang J, Lugonja M, et al. A new electronic screening tool for identifying risk of familial hypercholesterolaemia in general practice. Heart. 2016;102:855–61. https://doi.org/10.1136/heartjnl-2015-308824.

  91. Jimbo M, Shultz CG, Nease DE, Fetters MD, Power D, Ruffin MT. Perceived Barriers and Facilitators of Using a Web-Based Interactive Decision Aid for Colorectal Cancer Screening in Community Practice Settings: Findings From Focus Groups With Primary Care Clinicians and Medical Office Staff. J Med Internet Res. 2013;15. https://doi.org/10.2196/jmir.2914.

  92. Akanuwe JNA, Black S, Owen S, Siriwardena AN. Communicating cancer risk in the primary care consultation when using a cancer risk assessment tool: Qualitative study with service users and practitioners. Health Expect. 2020;23:509–18. https://doi.org/10.1111/hex.13016.

  93. Bangash H, Pencille L, Gundelach JH, Makkawy A, Sutton J, Makkawy L, et al. An Implementation Science Framework to Develop a Clinical Decision Support Tool for Familial Hypercholesterolemia. J Pers Med. 2020;10. https://doi.org/10.3390/jpm10030067.

  94. Bradley PT, Hall N, Maniatopoulos G, Neal RD, Paleri V, Wilkes S. Factors shaping the implementation and use of Clinical Cancer Decision Tools by GPs in primary care: a qualitative framework synthesis. BMJ Open. 2021;11:e043338. https://doi.org/10.1136/bmjopen-2020-043338.

  95. Breitbart EW, Choudhury K, Andersen AD, Bunde H, Breitbart M, Sideri AM, et al. Improved patient satisfaction and diagnostic accuracy in skin diseases with a Visual Clinical Decision Support System-A feasibility study with general practitioners. PLoS ONE. 2020;15:e0235410. https://doi.org/10.1371/journal.pone.0235410.

  96. Byrne D, O’Connor L, Jennings S, Bennett K, Murphy AW. A Survey of GPs Awareness and Use of Risk Assessment Tools and Cardiovascular Disease Prevention Guidelines. Ir Med J. 2015;108:204–7.

  97. Shillinglaw B, Viera AJ, Edwards T, Simpson R, Sheridan SL. Use of global coronary heart disease risk assessment in practice: a cross-sectional survey of a sample of U.S. physicians. BMC Health Serv Res. 2012;12:20. https://doi.org/10.1186/1472-6963-12-20.

  98. Caturegli G, Materi J, Lombardo A, Milovanovic M, Yende N, Variava E, et al. Choice architecture-based prescribing tool for TB preventive therapy: a pilot study in South Africa. Public Health Action. 2020;10:118–23. https://doi.org/10.5588/pha.20.0020.

  99. Chadwick D, Hall C, Rae C, Rayment M, Branch M, Littlewood J, et al. A feasibility study for a clinical decision support system prompting HIV testing. HIV Med. 2017;18:435–9. https://doi.org/10.1111/hiv.12472.

  100. Chadwick D, Forbes G, Lawrence C, Lorrimer S, van Schaik P. Using an electronic health record alert to prompt blood-borne virus testing in primary care. AIDS. 2021;35.

  101. Chadwick D, Forbes G, Lawrence C, Lorrimer S, van Schaik P. Using an electronic health record alert to prompt blood-borne virus testing in primary care. AIDS. 2021;35:1845–50. https://doi.org/10.1097/QAD.0000000000002935.

  102. Chima S, Reece JC, Milley K, Milton S, McIntosh JG, Emery JD. Decision support tools to improve cancer diagnostic decision making in primary care: a systematic review. Br J Gen Pract. 2019;69:e809. https://doi.org/10.3399/bjgp19X706745.

  103. Dobler CC, Sanchez M, Gionfriddo MR, Alvarez-Villalobos NA, Singh Ospina N, Spencer-Bonilla G, et al. Impact of decision aids used during clinical encounters on clinician outcomes and consultation length: a systematic review. BMJ Qual Saf. 2019;28:499–510. https://doi.org/10.1136/bmjqs-2018-008022.

  104. Fiks AG, Zhang P, Localio AR, Khan S, Grundmeier RW, Karavite DJ, et al. Adoption of Electronic Medical Record-Based Decision Support for Otitis Media in Children. Health Serv Res. 2015;50:489–513. https://doi.org/10.1111/1475-6773.12240.

  105. Ford E, Edelman N, Somers L, Shrewsbury D, Lopez Levy M, van Marwijk H, et al. Barriers and facilitators to the adoption of electronic clinical decision support systems: a qualitative interview study with UK general practitioners. BMC Med Inform Decis Mak. 2021;21:193. https://doi.org/10.1186/s12911-021-01557-z.

  106. Henshall C, Marzano L, Smith K, et al. A web-based clinical decision tool to support treatment decision-making in psychiatry: a pilot focus group study with clinicians, patients and carers. BMC Psychiatry. 2017;17:265. https://doi.org/10.1186/s12888-017-1406-z.

  107. Holmström IK, Gustafsson S, Wesström J, Skoglund K. Telephone nurses’ use of a decision support system: An observational study. Nurs Health Sci. 2019;21:501–7. https://doi.org/10.1111/nhs.12632.

  108. Porat T, Delaney B, Kostopoulou O. The impact of a diagnostic decision support system on the consultation: perceptions of GPs and patients. BMC Med Inform Decis Mak. 2017;17:79. https://doi.org/10.1186/s12911-017-0477-6.

  109. Kostopoulou O, Porat T, Corrigan D, Mahmoud S, Delaney BC. Diagnostic accuracy of GPs when using an early-intervention decision support system: a high-fidelity simulation. Br J Gen Pract. 2017;67:e201–8. https://doi.org/10.3399/bjgp16X688417.

  110. Laka M, Milazzo A, Merlin T. Factors That Impact the Adoption of Clinical Decision Support Systems (CDSS) for Antibiotic Management. Int J Environ Res Public Health. 2021;18. https://doi.org/10.3390/ijerph18041901.

  111. Lemke AA, Thompson J, Hulick PJ, Sereika AW, Johnson C, Oshman L, et al. Primary care physician experiences utilizing a family health history tool with electronic health record–integrated clinical decision support: an implementation process assessment. J Community Genet. 2020;11:339–50. https://doi.org/10.1007/s12687-020-00454-8.

  112. Lemke AA, Thompson J, Hulick PJ, Sereika AW, Johnson C, Oshman L, et al. Primary care physician experiences utilizing a family health history tool with electronic health record-integrated clinical decision support: an implementation process assessment. J Community Genet. 2020;11:339–50. https://doi.org/10.1007/s12687-020-00454-8.

  113. Li AC, Kannry JL, Kushniruk A, Chrimes D, McGinn TG, Edonyabo D, et al. Integrating usability testing and think-aloud protocol analysis with “near-live” clinical simulations in evaluating clinical decision support. Int J Med Inform. 2012;81:761–72. https://doi.org/10.1016/j.ijmedinf.2012.02.009.

  114. Lo LL, Collins IM, Bressel M, Butow P, Emery J, Keogh L, et al. The iPrevent Online Breast Cancer Risk Assessment and Risk Management Tool: Usability and Acceptability Testing. JMIR Form Res. 2018;2:e24. https://doi.org/10.2196/formative.9935.

  115. Margham T, Symes N, Hull SA. Using the electronic health record to build a culture of practice safety: evaluating the implementation of trigger tools in one general practice. Br J Gen Pract. 2018;68:e279. https://doi.org/10.3399/bjgp18X695489.

  116. North F, Fox S, Chaudhry R. Clinician time used for decision making: a best case workflow study using cardiovascular risk assessments and Ask Mayo Expert algorithmic care process models. BMC Med Inform Decis Mak. 2016;16:96. https://doi.org/10.1186/s12911-016-0334-z.

  117. Olakotan OO, Mohd YM. The appropriateness of clinical decision support systems alerts in supporting clinical workflows: A systematic review. Health Informatics J. 2021;27:14604582211007536. https://doi.org/10.1177/14604582211007536.

  118. Richardson S, Mishuris R, O’Connell A, Feldstein D, Hess R, Smith P, et al. “Think aloud” and “Near live” usability testing of two complex clinical decision support tools. Int J Med Inform. 2017;106:1–8. https://doi.org/10.1016/j.ijmedinf.2017.06.003.

  119. Richardson S, Feldstein D, McGinn T, Park LS, Khan S, Hess R, et al. Live Usability Testing of Two Complex Clinical Decision Support Tools: Observational Study. JMIR Hum Factors. 2019;6:e12471. https://doi.org/10.2196/12471.

  120. Scheitel MR, Kessler ME, Shellum JL, Peters SG, Milliner DS, Liu H, et al. Effect of a Novel Clinical Decision Support Tool on the Efficiency and Accuracy of Treatment Recommendations for Cholesterol Management. Appl Clin Inform. 2017;8:124–36. https://doi.org/10.4338/ACI-2016-07-RA-0114.

  121. Seol HY, Shrestha P, Muth JF, Wi CI, Sohn S, Ryu E, et al. Artificial intelligence-assisted clinical decision support for childhood asthma management: A randomized clinical trial. PLoS ONE. 2021;16:e0255261. https://doi.org/10.1371/journal.pone.0255261.

  122. Siaki LA, Lin V, Marshall R, Highley R. Feasibility of a Clinical Decision Support Tool to Manage Resistant Hypertension: Team-HTN, a Single-arm Pilot Study. Mil Med. 2021;186:e225–33. https://doi.org/10.1093/milmed/usaa255.

  123. Takamine L, Forman J, Damschroder LJ, Youles B, Sussman J. Understanding providers’ attitudes and key concerns toward incorporating CVD risk prediction into clinical practice: a qualitative study. BMC Health Serv Res. 2021;21:561. https://doi.org/10.1186/s12913-021-06540-y.

  124. Takamine L, Forman J, Damschroder LJ, Youles B, Sussman J. Understanding providers’ attitudes and key concerns toward incorporating CVD risk prediction into clinical practice: a qualitative study. BMC Health Serv Res. 2021;21:561. https://doi.org/10.1186/s12913-021-06540-y.

  125. Wan Q, Makeham M, Zwar NA, et al. Qualitative evaluation of a diabetes electronic decision support tool: views of users. BMC Med Inform Decis Mak. 2012;12:61. https://doi.org/10.1186/1472-6947-12-61.

  126. Wright T, Young K, Darragh M, Corter A, Soosay I, Goodyear-Smith F. Perinatal e-screening and clinical decision support: the Maternity Case-finding Help Assessment Tool (MatCHAT). J Prim Health Care. 2020;12(3):265–71. https://doi.org/10.1071/HC20029.

  127. Wu RR, Orlando LA, Himmel TL, Buchanan AH, Powell KP, Hauser ER, et al. Patient and primary care provider experience using a family health history collection, risk stratification, and clinical decision support tool: a type 2 hybrid controlled implementation-effectiveness trial. BMC Fam Pract. 2013;14:111. https://doi.org/10.1186/1471-2296-14-111.

  128. Dexheimer JW, Abramo TJ, Arnold DH, Johnson KB, Shyr Y, Ye F, et al. An asthma management system in a pediatric emergency department. Int J Med Inform. 2013;82:230–8. https://doi.org/10.1016/j.ijmedinf.2012.11.006.

  129. Moffat J, Ironmonger L, Green T. Clinical Decision Support Tool for Cancer (CDS) Project Evaluation Report to the Department of Health. Hull York Medical School; 2014. https://34p2k13bwwzx12bgy13rwq8p-wpengine.netdna-ssl.com/wp-content/uploads/2014/11/CDS-evaluation-report-Executive-summary.pdf.

  130. Murphy DR, Reis B, Sittig DF, Singh H. Notifications received by primary care practitioners in electronic health records: a taxonomy and time analysis. Am J Med. 2012;125:209.e1-7. https://doi.org/10.1016/j.amjmed.2011.07.029.

  131. Lugtenberg M, Westert GP, Pasveer D, van der Weijden T, Kool RB. Evaluating the uptake and effects of the computerized decision support system NHGDoc on quality of primary care: protocol for a large-scale cluster randomized controlled trial. Implement Sci. 2014;9:145.

  132. Carli D, Fahrni G, Bonnabry P, Lovis C. Quality of Decision Support in Computerized Provider Order Entry: Systematic Literature Review. JMIR Med Inform. 2018;6:e3. https://doi.org/10.2196/medinform.7170.

  133. Powers EM, Shiffman RN, Melnick ER, Hickner A, Sharifi M. Efficacy and unintended consequences of hard-stop alerts in electronic health record systems: a systematic review. J Am Med Inform Assoc. 2018;25:1556–66. https://doi.org/10.1093/jamia/ocy112.

  134. Hussain MI, Reynolds TL, Zheng K. Medication safety alert fatigue may be reduced via interaction design and clinical role tailoring: a systematic review. J Am Med Inform Assoc. 2019;26:1141–9. https://doi.org/10.1093/jamia/ocz095.

Acknowledgements

We would like to thank Sophie Robinson, who helped design the database searches.

Funding

This scoping review forms part of a PhD funded within the ERICA trial by a combination of The Dennis and Mireille Gillings Foundation, The University of Exeter, Cancer Research UK and the University of Exeter Medical School. No funding body was involved in the design of the study, in the collection, analysis or interpretation of data, or in writing the manuscript.

Author information

Contributions

EF participated in study design, wrote the protocol, undertook screening, full-text review, data extraction and analysis, and wrote the paper. GA and JC participated in study design and protocol writing, interpretation of the results and helped to revise and critically review the paper. AB, BW, DL and ES participated in screening and full-text review and helped revise and critically review the paper. WH participated in revising and critical review of the paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Emily Fletcher.

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication.

Not applicable.

Competing interests

The authors declare they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Fletcher, E., Burns, A., Wiering, B. et al. Workload and workflow implications associated with the use of electronic clinical decision support tools used by health professionals in general practice: a scoping review. BMC Prim. Care 24, 23 (2023). https://doi.org/10.1186/s12875-023-01973-2


Keywords