Choose 7 different articles to critique covering the six international patient safety goals. Remember, the six international patient safety goals are:
Ensuring correct patient identification
Ensuring effective communication
Ensuring the safety of high-alert medications
Ensuring correct site, correct surgery, correct procedure
Preventing healthcare-associated infections
Preventing patient falls
The 7 articles are not required to cover the same topic, as long as each falls within the international patient safety goals.
Research and applications
Understanding and preventing wrong-patient electronic orders: a randomized controlled trial

Jason S Adelman,1,2 Gary E Kalkut,1,3 Clyde B Schechter,5,6 Jeffrey M Weiss,1,4 Matthew A Berger,1,2 Stan H Reissman,9 Hillel W Cohen,5 Stephen J Lorenzen,7 Daniel A Burack,8 William N Southern1,2
Correspondence to: Dr Jason S Adelman, Montefiore Medical Center, 111 East 210th Street, Bronx, NY 10467, USA; [email protected]

Received 21 April 2012
Accepted 4 June 2012
Published Online First 29 June 2012
ABSTRACT
Objective To evaluate systems for estimating and preventing wrong-patient electronic orders in computerized physician order entry systems with a two-phase study.
Materials and methods In phase 1, from May to August 2010, the effectiveness of a retract-and-reorder measurement tool was assessed. The tool identified orders placed on a patient, promptly retracted, and then reordered by the same provider on a different patient as a marker for wrong-patient electronic orders. This tool was then used to estimate the frequency of wrong-patient electronic orders in four hospitals in 2009. In phase 2, from December 2010 to June 2011, a three-armed randomized controlled trial was conducted to evaluate the efficacy of two distinct interventions aimed at preventing these errors by reverifying patient identification: an ID-verify alert and an ID-reentry function.
Results The retract-and-reorder measurement tool identified 170 of 223 events as wrong-patient electronic orders, resulting in a positive predictive value of 76.2% (95% CI 70.6% to 81.9%). Using this tool, it was estimated that 5246 electronic orders were placed on the wrong patients in 2009. In phase 2, 901 776 ordering sessions among 4028 providers were examined. Compared with control, the ID-verify alert reduced the odds of a retract-and-reorder event (OR 0.84, 95% CI 0.72 to 0.98), but the ID-reentry function reduced the odds by a larger magnitude (OR 0.60, 95% CI 0.50 to 0.71).
Discussion and conclusion Wrong-patient electronic orders occur frequently with computerized provider order entry systems, and electronic interventions can reduce the risk of these errors occurring.
BACKGROUND
Currently, at least 70 000 US physicians use computerized provider order entry (CPOE) systems to place orders.1 2 This number is expected to rise sharply in coming years as hospitals take advantage of federal incentives for adopting electronic health record technology.3 Although CPOE systems are associated with a reduction in some types of medical errors,4–7 certain types of errors may occur frequently in these systems, including placing orders on the wrong patient.8–13 To date, there has been neither a reported method for efficiently measuring wrong-patient electronic orders nor a proven intervention for preventing them. To address this, we aimed to develop a reliable method to measure the frequency of wrong-patient electronic orders, study their underlying root causes, and test electronic interventions designed to avert them in a randomized controlled trial.

J Am Med Inform Assoc 2013;20:305–310. doi:10.1136/amiajnl-2012-001055
In March 2011 the Institute for Safe Medication Practices published a case report of a physician who accidentally ordered a sedative and paralytic agent for the wrong patient using a CPOE system, which resulted in respiratory arrest and death.14 The danger of wrong-patient electronic orders was further highlighted by one hospital's report that, after implementing a CPOE system, medications were prescribed for the wrong patient several times per month.15 In 2003, the US Pharmacopeia analyzed 7029 voluntarily reported medication errors over a 7-month period and found a mean of nine wrong-patient errors at each of 120 participating institutions using a CPOE system.16 This report probably underrepresented the true extent of wrong-patient electronic orders, as voluntary reporting is known to be an unreliable method for identifying errors.17 18 Automated surveillance of electronic clinical data has been demonstrated to be a more effective approach for identifying errors,19–21 but no automated method for identifying wrong-patient electronic orders has been described. We developed a simple and reliable automated method for measuring wrong-patient electronic orders that could be used at any institution with an electronic ordering system.
We hypothesized that wrong-patient orders are sometimes recognized by the orderer shortly after entry, promptly retracted, and then reentered on the correct patient. Koppel et al22 found that medication orders discontinued within 2 h of placement were a good predictor of prescribing errors (positive predictive value 55%, 95% CI 46% to 64%). Such errors, caught and corrected by the ordering provider before being carried out, are examples of near miss errors, which have been shown to occur up to 100 times more frequently than adverse events.23 Safety research has demonstrated that near miss errors share the same causal pathways as errors that cause harm.23 Tools that measure near miss errors may thus uncover faulty system designs that can lead to harmful errors, and interventions that prevent near miss wrong-patient electronic orders should also prevent the wrong-patient electronic orders that reach the patient and cause harm. Accordingly, the Agency for Healthcare Research and Quality has included near miss errors in the Common Formats, the standard definitions used to facilitate the collection and reporting of patient safety events to patient safety organizations.24
1 Department of Medicine, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, NY, USA
2 Division of Hospital Medicine, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, NY, USA
3 Division of Infectious Diseases, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, NY, USA
4 Division of General Internal Medicine, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, NY, USA
5 Departments of Medicine, Epidemiology and Population Health, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, NY, USA
6 Family and Social Medicine, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, NY, USA
7 Boston College, Chestnut Hill, Massachusetts, USA
8 Harvard University, Cambridge, Massachusetts, USA
9 Emerging Health Information Technology, Bronx, New York, USA
METHODS
The research protocol was designed as a two-phase study within Montefiore Medical Center, an academic medical center in the Bronx, New York, consisting of three general hospitals and one children's hospital with 1500 inpatient beds, using a Centricity CPOE system (GE Healthcare, Wisconsin, USA).
Phase 1: evaluating the performance of the retract-and-reorder measurement tool
To measure wrong-patient electronic orders, we developed the retract-and-reorder measurement tool, which identifies orders (including medications, blood tests, imaging, and general care orders) placed on a patient that were retracted within 10 min and then reordered by the same provider on a different patient within 10 min of retraction. Orders were not identified as potential errors if they were reordered on the initial patient by any provider within 24 h of retraction.
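The paper does not publish the tool's implementation; a minimal sketch of the detection rule it describes, using hypothetical field names for the order records, might look like:

```python
from datetime import datetime, timedelta

TEN_MIN = timedelta(minutes=10)
DAY = timedelta(hours=24)

def retract_and_reorder_events(orders):
    """Flag retract-and-reorder events: an order placed on a patient,
    retracted within 10 min, then reordered by the same provider on a
    different patient within 10 min of retraction. Events are excluded
    if any provider reorders the same item on the initial patient
    within 24 h of retraction. Each order is a dict with illustrative
    keys: provider, item, patient, placed_at, retracted_at."""
    events = []
    for o in orders:
        if o.get("retracted_at") is None:
            continue
        if o["retracted_at"] - o["placed_at"] > TEN_MIN:
            continue
        # Same provider and item, different patient, within 10 min of retraction.
        reordered_elsewhere = any(
            r["provider"] == o["provider"]
            and r["item"] == o["item"]
            and r["patient"] != o["patient"]
            and timedelta(0) <= r["placed_at"] - o["retracted_at"] <= TEN_MIN
            for r in orders
        )
        # Exclusion: same item reordered on the initial patient by anyone within 24 h.
        reordered_same_patient = any(
            r is not o
            and r["item"] == o["item"]
            and r["patient"] == o["patient"]
            and timedelta(0) <= r["placed_at"] - o["retracted_at"] <= DAY
            for r in orders
        )
        if reordered_elsewhere and not reordered_same_patient:
            events.append(o)
    return events
```

The same-patient exclusion is what filters out legitimate cancel-and-replace activity on a single patient, leaving only the cross-patient pattern as the error marker.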
To determine if the retract-and-reorder events identified by the measurement tool represent true wrong-patient orders, we conducted twice-daily semistructured phone interviews with providers who were identified by the retract-and-reorder tool from 21 May to 18 August 2010. Providers were contacted within 12 h of the event. After obtaining oral consent, providers were asked if the event represented a true wrong-patient error and, if so, to classify the type of error as juxtaposition, interruption, or other.
The primary endpoints of phase 1 were the proportion of retract-and-reorder events that were true positive wrong-patient electronic orders based on the provider interviews, and the overall frequency of retract-and-reorder events. A secondary measure was the proportion of true positive wrong-patient electronic orders categorized as juxtaposition errors, interruption errors, or other errors.
In addition, each retract-and-reorder event was independently classified by two study physicians for the potential for harm if the order had been carried out. Harm was classified as clinically significant, serious, or life-threatening.26 Any disagreements were resolved by consensus. To determine if our results depended on the specific definition of retract-and-reorder events, we performed a sensitivity analysis using several combinations of time-to-retraction and time-to-reorder intervals.
Phase 2: intervention efficacy trial
After phase 1 was completed, two interventions were developed to prevent wrong-patient electronic orders: an ID-verify alert and an ID-reentry function. The ID-verify alert is triggered on opening the order entry screen and displays the patient's name, gender, and age. Using a single-click response, a provider must acknowledge they are ordering on the correct patient before they can proceed. The ID-reentry function blocks access to the order entry screen until the provider actively reenters the patient's initials, gender, and age. After establishing the effectiveness of our measurement tool in phase 1, we performed a three-armed randomized controlled concurrent trial to investigate the effectiveness of both interventions in preventing wrong-patient electronic orders compared with controls. As we were not able to measure wrong-patient electronic orders directly, we used the retract-and-reorder tool developed in phase 1 to measure retract-and-reorder events. All providers, including attending physicians, residents, physician assistants, registered nurses, nurse practitioners, and pharmacists who placed orders on inpatients from 16 December 2010 to 17 June 2011, were randomly assigned to always receive either the ID-verify alert, the ID-reentry function, or no intervention. The requirement for consent was waived by the institutional review board. Although it was not possible to blind the participants to their study group assignment, all data extraction, management, and analyses were carried out with study personnel unaware of study group assignment. Study group assignments were unblinded only after all analyses were complete.
The primary endpoint of phase 2 was the proportion of ordering sessions that contained retract-and-reorder events, as a marker for wrong-patient electronic orders. As a secondary measure, we calculated the time added to a provider's daily work for each intervention compared with control by measuring the interval between request and access granted to the order entry screen.
The research protocol was approved by the institutional review board and registered at clinicaltrials.gov (#NCT01262053). A data and safety monitoring board blindly reviewed interim results at 6-week intervals, and also reviewed any safety events that came to attention.
Statistical analysis
In phase 1, the positive predictive value of the retract-and-reorder tool was calculated by dividing the number of confirmed wrong-patient order sessions by the total number of retract-and-reorder sessions surveyed by phone interview. The frequency of wrong-patient electronic orders in 2009 was then estimated by multiplying the frequency of retract-and-reorder events by the positive predictive value of the retract-and-reorder measurement tool. Results for phase 1 were reported as rates, with retract-and-reorder events and estimates of wrong-patient electronic orders expressed per 100 000 orders, and grouped by provider type, order type, visit type, and potential harm.
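The phase 1 estimation is simple arithmetic, and the headline figures reported later in the paper can be reproduced directly from the published counts:

```python
# Reproduce the phase 1 estimates from the counts reported in the paper.
confirmed, surveyed = 170, 223        # phone-interview outcomes
ppv = confirmed / surveyed            # positive predictive value, ~76.2%

rar_events = 6885                     # retract-and-reorder events in 2009
total_orders = 9_024_723              # all electronic orders in 2009
providers_with_events = 1388
all_providers = 6147

# Estimated wrong-patient orders; the paper's 5246 uses the PPV
# rounded to 76.2% (the unrounded PPV gives ~5249).
est_wrong_patient = rar_events * round(ppv, 3)

rate_per_100k = rar_events / total_orders * 100_000   # ~76
per_day = est_wrong_patient / 365                     # ~14
one_in_n = all_providers / (providers_with_events * ppv)  # roughly 1 in 6

print(round(ppv * 100, 1))        # 76.2
print(round(est_wrong_patient))   # 5246
print(round(rate_per_100k))       # 76
print(round(per_day))             # 14
print(round(one_in_n))            # 6
```

This also recovers the summary ratios quoted in the results: about 14 wrong-patient orders per day and roughly one in six providers placing at least one such order.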
In phase 2, demographic descriptions of participant providers were obtained from the medical center administrative database and linked to the provider records in our data. All providers using the CPOE system participated in the study; there was no withdrawal or non-compliance with randomization assignment.
The unit of analysis for the randomized controlled trial was the ordering session, during which providers selected a patient, verified the patient's identification if in an intervention group, and then placed one or more orders. An ordering session was classified as a retract-and-reorder session when it contained at least one retract-and-reorder event. Mixed-model logistic regression with provider-level random effects was used to estimate the OR for retract-and-reorder sessions in the two intervention arms compared with the control arm.
Common types of errors that lead to wrong-patient orders are juxtaposition errors, in which the wrong patient is selected from a list of names by mis-clicking,8 10–13 and interruption errors, in which providers are interrupted while toggling between patients.13 Other reported causes of errors include small font size,8 failure to log off between providers,8 and the ease of transposing incorrect numbers when entering medical record numbers.10 In a study that tracked provider eye movements while using CPOE, none of the providers looked at a second identifier before selecting a patient from an alphabetical list, even when two patients had the same last name and similar first names.25 We hypothesized that requiring providers to reaffirm patient identification when placing orders, using a second and a tertiary identifier, would reduce wrong-patient electronic orders.
In phase 1 of the present study we assessed the effectiveness of an automated measure of wrong-patient electronic orders. In phase 2 we tested the effectiveness of two interventions designed to reduce these errors in a three-armed randomized controlled concurrent trial.
RESULTS
Phase 1: performance of the retract-and-reorder measurement tool
We interviewed 236 providers identified by the automated retract-and-reorder measurement tool. Of those contacted, 13 providers did not remember the details of the event and were excluded. Of the remaining 223 providers, 170 acknowledged erroneously placing an electronic order on the wrong patient, resulting in a positive predictive value of 76.2% (95% CI 70.6% to 81.9%). Sensitivity analysis testing several combinations of time to retraction and time to reorder demonstrated similar positive predictive values (data not shown). Of the 170 wrong-patient orders identified, 18 (10.6%) were classified as juxtaposition errors, 137 (80.6%) as interruption errors, and 15 (8.8%) as other.

Estimated frequency of wrong-patient order errors
We reviewed all nine million electronic orders placed by 6147 providers at Montefiore in 2009. We found 6885 retract-and-reorder events attributed to 1388 providers, with a mean time to retraction of 1 min and 18 s. Table 1 lists the frequency of retract-and-reorder events by provider type, order type, unit type, and degree of potential harm. Applying the positive predictive value found in phase 1, we estimated that the 6885 retract-and-reorder events represented 5246 wrong-patient electronic orders placed in 2009, an average of 14 wrong-patient electronic orders per day. By this measure, approximately one in six providers placed at least one electronic order on the wrong patient, and approximately one in 37 patients admitted to the hospital had an order placed for them that was intended for another patient. All of these errors were near misses, self-caught by the provider before causing patient harm.

Phase 2: intervention trial
During the randomized intervention trial, we examined a total of 901 776 ordering sessions, representing 3 281 544 inpatient orders, among 4028 providers from 16 December 2010 to 17 June 2011 (figure 1). Table 2 depicts provider demographic characteristics.

Table 1
Retract-and-reorder events and wrong-patient orders in 2009 by provider type, order type, visit type, and degree of harm

| Category | Total no of orders | Retract-and-reorder events* | Retract-and-reorder events per 100 000 orders | Wrong-patient orders per 100 000 orders† |
|---|---|---|---|---|
| Totals | 9 024 723 | 6885 | 76 | 58 |
| By provider type | | | | |
| Physicians | 4 558 198 | 3606 | 79 | 60 |
| Physician assistants and nurse practitioners | 2 346 463 | 2283 | 97 | 74 |
| Nurses | 1 238 011 | 543 | 44 | 33 |
| Pharmacist | 273 857 | 241 | 88 | 67 |
| Other/unknown | 608 194 | 212 | 35 | 27 |
| By order type | | | | |
| Radiology | 803 584 | 996 | 124 | 94 |
| Lab | 4 109 802 | 2605 | 63 | 48 |
| Medications | 2 414 251 | 2163 | 90 | 68 |
| Nursing orders | 929 402 | 464 | 50 | 38 |
| Other | 767 684 | 657 | 86 | 65 |
| By visit type | | | | |
| Inpatient | 6 141 346 | 5193 | 85 | 64 |
| Emergency department | 2 639 424 | 1481 | 56 | 43 |
| Outpatient | 126 858 | 142 | 112 | 85 |
| Ambulatory surgery | 117 095 | 69 | 59 | 45 |
| By potential for harm | | | | |
| Life threatening | 9 024 723 | 166 | 2 | 1 |
| Serious | 9 024 723 | 359 | 4 | 3 |
| Clinically significant | 9 024 723 | 1274 | 14 | 11 |

*Retract-and-reorder events are orders placed on one patient, promptly retracted, and reordered by the same provider on a different patient as a marker for wrong-patient electronic orders.
†Wrong-patient orders estimated by multiplying retract-and-reorder events by the positive predictive value of the retract-and-reorder tool (76.2%).
Based on the number of providers using the system and the distribution of ordering sessions per provider per day, simulations revealed that the planned study duration of 6 months provided 98% power to detect a 50% reduction in the odds of retract-and-reorder sessions in either of the intervention arms, using a two-tailed z-test of the coefficient of a study-arm indicator at the 0.05 significance level. Because the Data Safety Monitoring Board was authorized to recommend early termination of the study if conclusive results in either direction emerged, an O'Brien-Fleming style alpha-spending plan for four scheduled comparisons relying on the Lan-DeMets algorithm was implemented.27 Results for the three groups were presented to the Data Safety Monitoring Board in a blinded fashion, without identifying which group corresponded to which arm of the study.
To study potential delays introduced by the interventions, we extracted from the order entry system a complete enumeration of the duration of the interval from ordering session request to unlocking of the order pad for all ordering sessions in the study. We present the mean additional ordering session times per session in each study arm.
Datasets were collected using Fair Isaac's Blaze ruleset embedded within GE Healthcare Centricity Enterprise electronic medical record versions 6.1 and 6.6, and analyzed using Stata V.11.2 MP after importation using Stat Transfer V.9.
Figure 1 Patient enrollment and randomization: 4028 eligible providers were randomized (1419 to control, 1352 to the ID-verify alert, 1257 to the ID-reentry function), and all were analyzed in their assigned groups.
The effects of the interventions on retract-and-reorder events are presented in table 3. The rates of retract-and-reorder sessions in the three study arms were 1.5 per 1000 ordering sessions in the control group, 1.2 per 1000 in the ID-verify alert group, and 0.9 per 1000 in the ID-reentry function group. Compared with control, the ID-verify alert significantly reduced the odds of a retract-and-reorder event (OR 0.84, 95% CI 0.72 to 0.98). The ID-reentry function significantly reduced the odds of a retract-and-reorder event by a larger magnitude (OR 0.60, 95% CI 0.50 to 0.71).
The mean additional ordering time per session resulting from the interventions was 0.5 s for the ID-verify alert and 6.6 s for the ID-reentry function (table 3).
DISCUSSION
We developed a tool to measure wrong-patient electronic orders in CPOE systems, and examined its performance in finding errors in our system. Using this tool, we identified 6885 retract-and-reorder events in 1 year, which represent an estimated 5246 wrong-patient orders. Although all were near miss errors, previous research has demonstrated that near miss errors share a common causal pathway with errors that cause harm.23 Wrong-patient errors may thus represent a significantly larger hazard than was previously reported.16 We further demonstrated that an ID-verify alert (single-click confirmation of patient identity) reduced wrong-patient electronic orders by 16%, while an active ID-reentry function (requiring active reentry of identifiers) achieved a 41% reduction.
Table 2 Demographic characteristics of providers in randomized controlled trial

| Attribute | Control (N=1419) | ID-verify alert* (N=1352) | ID-reentry function† (N=1257) |
|---|---|---|---|
| Sex, n (%) | | | |
| Male | 425 (30.0%) | 398 (29.4%) | 373 (29.7%) |
| Female | 870 (61.3%) | 835 (61.8%) | 765 (60.1%) |
| Unknown | 124 (8.7%) | 119 (8.8%) | 119 (9.5%) |
| Provider type, n (%) | | | |
| MD | 631 (44.5%) | 579 (42.8%) | 532 (42.3%) |
| RN | 338 (23.8%) | 348 (25.7%) | 311 (24.7%) |
| PA and NP | 104 (7.3%) | 97 (7.2%) | 101 (8.0%) |
| Unknown and other | 346 (24.4%) | 328 (24.3%) | 313 (24.9%) |
| Age, median (IQR) | 37 (18) | 38 (19) | 37 (18) |
| Years since training, median (IQR) | 9.9 (16.0) | 9.9 (14.3) | 8.9 (14.0) |

*ID-verify alert required a single-click confirmation of the patient's name, gender, and age.
†ID-reentry function required the provider to accurately reenter the patient's initials, gender, and age into a data field.
MD, physician; NP, nurse practitioner; PA, physician assistant; RN, registered nurse.
All hospitals that implement CPOE systems should consider measuring retract-and-reorder events to estimate the frequency of wrong-patient orders, and optimize their software to minimize these errors.
Using semistructured phone interviews, we identified 170 of 223 retract-and-reorder events as wrong-patient electronic orders, resulting in a positive predictive value of 76.2% (95% CI 70.6% to 81.9%). A common explanation for false positive retract-and-reorder events involved providers who, for example, while on total parenteral nutrition (TPN) rounds in the neonatal intensive care unit, cancelled a TPN order shortly after placing it for a reason other than a wrong-patient error, and then moved to the next neonatal intensive care unit patient in need of TPN. Providers on warfarin rounds and potassium rounds gave similar explanations. Although these events involved rapidly retracted orders, they did not represent wrong-patient errors.
In addition to estimating the frequency of wrong-patient electronic orders, the automated retract-and-reorder tool allowed us to interview providers involved in these events in near real time to determine the cause of the error. Although previous research highlighted juxtaposition errors as a prominent mechanism,9–11 13 we found that only 11% of the errors were reported to result from this cause. The more commonly reported cause was interruptions, accounting for 81% of the errors. The ease of toggling between patients in CPOE systems and the frequent interruptions of a busy hospital unit were found to induce wrong-patient electronic orders. However, providers may be more likely to catch interruption errors than other types of errors; if so, we have underestimated the frequency of non-interruption errors. Regardless of the mechanism of error, our interventions were designed to avert wrong-patient electronic orders by requiring that providers reaffirm patient identification before placing orders.
Through our clinical trial, we evaluated two risk-reduction strategies. The ID-reentry function proved more effective than the ID-verify alert, with error reductions of 41% and 16%, respectively. While a 41% error reduction is significant, the ID-reentry function did not completely eliminate retract-and-reorder events. This may be partly because some providers inattentively reentered patients' initials, gender, and age without carefully verifying identity. In addition, 23.8% of retract-and-reorder events did not represent wrong-patient errors, and thus were not likely to be impacted by the ID-reentry function. Human factors and usability experts, who are trained to optimize the
Table 3 Results of randomized controlled trial evaluating two interventions designed to reduce wrong-patient electronic orders

| | Control | ID-verify alert* | ID-reentry function† |
|---|---|---|---|
| No of orders | n=1 173 693 | n=1 038 516 | n=1 069 335 |
| No of order sessions | n=316 981 | n=293 621 | n=291 174 |
| Rates of retract-and-reorder sessions per 1000 order sessions | 1.5 | 1.2 | 0.9 |
| OR (95% CI) | | 0.84 (0.72 to 0.98) | 0.60 (0.50 to 0.71) |
| Percentage reduction compared with control (p value) | | 16% (p=0.03) | 41% (p |
| Mean additional single session ordering time | | 0.5 s | 6.6 s |