See what the six pilot sites from the SIDM-IHI collaborative did (2017-18) to improve the diagnostic process.
UM tested the use of innovative technology to accurately interpret unstructured provider notes, specifically the medical decision making (MDM) portion of the electronic health record, to gain insight into the cognitive aspects of the diagnostic process. If the methods prove reliable, the team hopes to validate them for screening large numbers of electronic health records (EHRs), investigating the epidemiology of diagnostic error, and deriving interventions to mitigate such errors in acute care. The MDM portion of the EHR is the only surrogate that captures the cognitive aspects of decision making. UM field tested a deliberately narrow intervention: a scripted, structured patient problem representation statement template intended to increase the influence of System 2 (slow thinking) over System 1 (fast thinking) and thus potentially reduce diagnostic mishaps. The aim was to demonstrate that by structuring the MDM of pediatric emergency department visit notes through a template, the team could influence, affect, or at least better monitor the “in the moment” diagnostic decision making of a pediatric emergency department patient encounter.
Northwell, over the last six months, developed and deployed PDSA trials to reduce diagnostic errors within the Ambulatory (Community/Faculty Practices), Emergency Department (LIJMC), and Inpatient (Lenox-Adult/Cohen-Pediatrics) clinical settings by focusing on the roles of the patient, family, and caregiver. They carried out enhanced patient communication using a scripted “Teach-Back” intervention in 507 trials over a five-month period. Providers were asked to explain a given diagnosis to a patient and have the patient repeat back, at the end of the encounter, what they understood about their diagnosis or diagnoses. Based on the definition of diagnostic error as “the failure to (a) establish an accurate and timely explanation of the patient’s health problem(s) or (b) communicate that explanation to the patient,” the team predicted that Teach-Back, employed routinely during the patient encounter to improve communication, would decrease diagnostic error. The effectiveness of this approach was measured using exit surveys to elicit the following data points: 1) Did patients receive the intervention as designed? That is, did patients perceive they were being engaged in an enhanced-communication intervention? 2) Did patients respond positively to the intervention? 3) Were patients able to demonstrate a clear understanding of their diagnosis or diagnoses following the intervention? Future quality improvement initiatives surrounding patient engagement could use Press-Ganey’s patient satisfaction metrics to gauge longer-term improvement trends.
During this prototyping collaborative, the team introduced a framework for diagnostic deliberation, the “diagnostic time-out,” in an effort to circumvent cognitive biases that may interfere with medical decision making. The diagnostic time-out asks the medical team to pause and structures the discussion with two questions: 1) What are the two to three most likely diagnoses for this patient? and 2) What is at least one life-threatening or more severe diagnosis that we must consider for this patient? Ideally, the diagnostic time-out would occur during morning rounds in collaboration with patients and their families. The hypothesis is that fostering an environment of active discussion around diagnosis would improve the differential diagnoses. The team examined resident and attending physician documentation of differential diagnoses on initial history and physicals (H&Ps) of patients admitted to the general pediatric ward with abdominal pain, a population that lends itself to a broad differential diagnosis but often poses a diagnostic challenge when a significant evaluation, including laboratory and imaging studies, in the emergency department is inconclusive. For the baseline, the team reviewed 67 charts of patients admitted with nonspecific abdominal pain between July and December 2017. The diagnostic time-out intervention was introduced in January 2018. Documentation that included two or more likely diagnoses and consideration of at least one life-threatening or more severe diagnosis was counted. Post-intervention, a total of 98 patient charts were reviewed between January and June 2018, and the team monitored seven-day and 30-day readmission rates in this patient population. The project aim was to increase the percentage of well-documented differential diagnoses on initial H&Ps from a baseline of 76% to 95%. Despite several iterative PDSA cycles and interventions over the course of this collaborative, the team was unable to reach that goal within six months.
However, the dialogue around differential diagnosis and diagnostic error has been robust, and the team hopes to sustain the momentum in order to innovate and drive change.
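The chart-review scoring described above can be sketched in code. The record fields below are hypothetical stand-ins for whatever the reviewers abstracted from each H&P; only the two counting criteria (two or more likely diagnoses, at least one severe diagnosis considered) come from the text.

```python
from dataclasses import dataclass

# Hypothetical abstraction of one reviewed H&P; field names are illustrative.
@dataclass
class HandP:
    likely_diagnoses_documented: int   # number of likely diagnoses listed
    severe_diagnosis_considered: bool  # at least one life-threatening dx noted

def is_well_documented(note: HandP) -> bool:
    """Time-out criteria: >= 2 likely diagnoses AND >= 1 severe diagnosis."""
    return note.likely_diagnoses_documented >= 2 and note.severe_diagnosis_considered

def percent_well_documented(notes) -> float:
    """Share of reviewed charts meeting both documentation criteria."""
    return 100.0 * sum(is_well_documented(n) for n in notes) / len(notes)
```

Applied over the 67 baseline and 98 post-intervention charts, this yields the percentages the team tracked against its 76% baseline and 95% goal.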
The members of the hospital medicine program developed a triggered two-provider diagnostic error review in parallel with a provider-level diagnostic error feedback mechanism.
Cases were identified using four triggers: seven-day all-cause hospital readmissions, autopsy, inpatient mortality, and self-report. These cases were reviewed by two hospital medicine physicians using the Safer Dx tool to determine whether a diagnostic error had occurred and the impact of that error. Root causes of the error were identified in collaboration with the involved providers, guided by the DEER Taxonomy tool. Feedback on the cases and trends was given to the involved provider as well as the hospital medicine physician group; providers were first contacted by email and then invited to a verbal discussion of the diagnostic process, with a focus on potential systems improvement.
From January-June 2018, there were 4458 discharges from the hospital medicine service with 201 (4.5%) seven-day readmissions; 196 readmissions underwent review. Seventeen (8.7%) were found to contain diagnostic errors representing a breadth of unique diagnoses. Sixteen had a moderate impact on patients including short-term morbidity, increased length of stay, or invasive procedure. The most common categories of root cause included Laboratory/Radiology Tests and Assessment; the most common subcategories were failure/delay in ordering needed test(s), erroneous clinician interpretation of test, and failure/delay to recognize/weigh urgency.
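The trigger-based case selection can be sketched as a simple filter. The record fields below are hypothetical, not the program's actual data schema; the four trigger names come from the text.

```python
from dataclasses import dataclass

# Hypothetical discharge record; field names are illustrative only.
@dataclass
class Discharge:
    patient_id: str
    readmitted_within_7_days: bool = False
    autopsy_performed: bool = False
    inpatient_mortality: bool = False
    self_reported_concern: bool = False

    def triggers(self):
        """Return the names of the review triggers this discharge fires."""
        names = {
            "7-day readmission": self.readmitted_within_7_days,
            "autopsy": self.autopsy_performed,
            "inpatient mortality": self.inpatient_mortality,
            "self-report": self.self_reported_concern,
        }
        return [name for name, fired in names.items() if fired]

def select_for_review(discharges):
    """Queue every discharge that fires at least one trigger for two-physician review."""
    return [d for d in discharges if d.triggers()]
```

In the period reported above, this kind of screen narrowed 4,458 discharges to 196 reviewable readmissions before the two-physician Safer Dx review.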
There were several important lessons learned from this work:
1) It is possible to align diagnostic improvement with existing quality/financial institutional priorities such as readmissions
2) A symptom or diagnosis focused approach to diagnostic error exploration (as opposed to more general trigger) may allow for tailored interventions
3) Clinician interpretation of lab/radiology testing may be an area of focus for inpatient diagnosis; solutions could utilize the electronic medical record and artificial intelligence
Radiology orders that are not completed, or completed but not followed up, increase the risk of diagnostic errors. The Outpatient Radiology Results Notification Engine (“Results Engine”) is designed to send notifications about outpatient orders and results to the Tufts MC physicians who place those orders. The workflow starts when a radiology order is placed for a patient and an appointment is scheduled. If the appointment date is less than five days from the date the appointment was scheduled, or if the order is for an inpatient admission, the Results Engine does not track that order, because it is assumed to be at low risk of missed follow-up.
Otherwise, the Results Engine sets a timer to expire 14 days from the date of the outpatient test appointment. The Results Engine began generating automated email alerts in April 2016, starting with rheumatology, and was expanded to pulmonary and adult primary care outpatient clinics. This capability can help clinicians ensure that planned follow-up testing is completed and reviewed for abnormal Pap smears, lung nodules, skin biopsies, colon polyps, and mammograms. It can also be used to alert clinicians to delayed and canceled high-risk referrals (e.g., surgical evaluation for a breast nodule or a gastroenterology referral for rectal bleeding).
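The tracking rule described above can be sketched as follows. The function and field names are assumptions for illustration; the two thresholds (five-day low-risk window, 14-day alert timer) come from the text.

```python
from datetime import date, timedelta

# Thresholds stated in the text.
LOW_RISK_LEAD_DAYS = 5      # appointments within 5 days of scheduling are not tracked
FOLLOW_UP_WINDOW_DAYS = 14  # alert timer expires 14 days after the appointment

def should_track(scheduled_on: date, appointment_on: date, is_inpatient: bool) -> bool:
    """Orders completed soon after scheduling, or placed for inpatients,
    are assumed to be at low risk of missed follow-up and are not tracked."""
    if is_inpatient:
        return False
    return (appointment_on - scheduled_on).days >= LOW_RISK_LEAD_DAYS

def alert_due_date(appointment_on: date) -> date:
    """Date on which an automated email alert would go to the ordering physician."""
    return appointment_on + timedelta(days=FOLLOW_UP_WINDOW_DAYS)
```

The design choice is to spend attention only where follow-up is likely to slip: near-term appointments and inpatient orders are filtered out up front, and the remaining orders get a fixed-length timer rather than continuous polling.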
The project is well aligned with efforts to reduce diagnostic error and malpractice liability in ambulatory care. Participating clinicians have already identified several occasions on which an important missed imaging test was detected by the Results Engine, leading to successful completion of the test. The initiative is a potentially generalizable approach to identifying missed and delayed test results, even in health systems running multiple different EMRs. Email notifications are sent securely within the medical center firewall. Like many projects, implementation is more complicated than one might imagine, even when the original plan was to disseminate an established tool. Future work should address methods to accelerate and test the deployment of this approach, examine its impact on busy providers, and prevent serious diagnostic delays. The Results Engine makes the invisible (the missed visit) visible.
The VTE advisor is a mandatory clinical decision support tool embedded in the EMR, designed to help physicians risk stratify patients for venous thromboembolism (VTE) and guide VTE prophylaxis recommendations. It is the only decision support tool that is part of the admission order set. Acceptance and appropriate use of the tool have been low for a variety of reasons (the perception that it is too time-consuming to use, poor functionality during the initial rollout, lack of knowledge about its purpose, lack of buy-in to the value of decision support, etc.). For instance, an audit of VTE advisor completion in the medical intermediate care unit during November revealed that 60% of patients were characterized as low risk. Moreover, when the tool is used, it is often completed incorrectly or overridden. The team believed that, even among providers who use the tool correctly, there was an opportunity to improve the cognitive process behind risk stratification for VTE prophylaxis and to increase the number of patients who receive optimal VTE prophylaxis. This project sought to improve utilization of the VTE tool through education on clinical decision support tools, the requirement of VTE risk stratification as a core measure performance metric, ACCP recommendations for optimal VTE prophylaxis, and a session on correct completion of the VTE advisor tool.
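The risk-stratification step a tool like this automates can be illustrated with the Padua Prediction Score, a widely used VTE risk model for hospitalized medical patients. The text does not specify which criteria the VTE advisor actually uses, so this is only an illustrative sketch, not the tool's implementation.

```python
# Padua Prediction Score items and weights for hospitalized medical patients.
# A total score >= 4 is conventionally treated as high risk for VTE.
PADUA_ITEMS = {
    "active_cancer": 3,
    "previous_vte": 3,
    "reduced_mobility": 3,
    "known_thrombophilia": 3,
    "recent_trauma_or_surgery": 2,
    "age_70_or_older": 1,
    "heart_or_respiratory_failure": 1,
    "acute_mi_or_ischemic_stroke": 1,
    "acute_infection_or_rheumatologic_disorder": 1,
    "obesity_bmi_30_or_more": 1,
    "ongoing_hormonal_treatment": 1,
}

def padua_score(findings: set) -> int:
    """Sum the weights of the risk factors present for this patient."""
    return sum(weight for item, weight in PADUA_ITEMS.items() if item in findings)

def is_high_risk(findings: set) -> bool:
    """Padua score >= 4 flags the patient as high risk, prompting prophylaxis review."""
    return padua_score(findings) >= 4
```

The point of embedding such a checklist in the order set is that each weighted item is answered explicitly, so a patient cannot be labeled low risk simply because a factor was never considered.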
Driving Quality Improvement Efforts
Learn more about the work SIDM is doing to test and develop tools and techniques to improve the quality and safety of the diagnostic process.