Clinical Pathology

Development and Testing of a Laboratory Information System–Driven Tool for Pre–Sign-Out Quality Assurance of Random Surgical Pathology Reports

Abstract

We describe the development and testing of a novel pre–sign-out quality assurance tool for case diagnoses that allows random review of a percentage of cases by a second pathologist before case verification and release of the final report. The tool records and reports levels of diagnostic disagreement, reviewers’ comments, and the steps taken to resolve any discrepancies identified. It is expandable to allow review of any percentage of cases on any number of subspecialty or general pathology “benches,” and it provides a prospective instrument for preventing some serious errors, thereby offering a direct potential benefit to patient care in addition to identifying and documenting more general process issues. It can also augment more conventional quality control methods such as frozen section/final diagnosis correlation, conference review, and case review before interdisciplinary clinicopathologic sessions. There has been no significant delay in case turnaround time since implementation. Further assessment of the tool’s function after full departmental application is underway.

Image 1
Initial steps in the pre–sign-out quality assurance tool. When the originating pathologist’s electronic signature is entered (top left), a dialogue box appears indicating that the case has been selected for inclusion in the tool (top right). The case then automatically moves to the pathologist quality control (QC) review work list (bottom left), where it is not assigned to any one reviewing pathologist (arrows). On the originating pathologist’s work list (bottom right), the case appears shaded with the reminder “Pathologist QC Review: Pending” (arrows) in red until the reviewing pathologist completes his or her review.


Introduction

Quality assurance (QA) and quality improvement activities in surgical pathology have received increasing attention in recent years and are a growing part of pathologists’ daily activity in academic and community practice settings. As public concern about medical costs and medical errors has grown, the entire health care system has responded by turning increased attention to quality-related issues. Pathology has participated in this movement, and the Association of Directors of Anatomic and Surgical Pathology has issued recommendations for anatomic pathology quality control and quality improvement since 1991.1,2

Despite somewhat nebulous definitions of quality and error in the context of surgical pathology, numerous approaches have been used in attempts to measure and/or improve pathology practice, including retrospective and prospective reviews of all cases or a proportion of cases, mandatory second review of new malignancies, focused reviews of a subset of cases, and others.3–7 Such approaches can focus on specimen receipt and processing, the final diagnoses made by pathologists, or any intervening step(s). Although all of these methods have the potential to identify process and diagnostic errors, a system centered on the review of cases before they are signed out ensures that some errors are not only identified but also prevented, reducing the number of reports that require post–sign-out amendment and providing greater benefit to individual patients.

Studies using the various tools alluded to in the preceding paragraph have revealed error rates ranging from 0.26% to 1.2% when all cases are reviewed prospectively by pathologists in the same institution and as high as 6.7% in retrospective blinded reviews.3,8 In prior reports, errors have been defined as interpretational (involving incorrect classification of tissue findings by a pathologist) or reporting (involving errors in specimen description, clinical information, or typographical/clerical errors in the physical report). Errors may also be classified as major (judged as having a significant real or potential effect on clinical care) or minor (without significant impact on patient care).9

The University of Pittsburgh Medical Center Department of Pathology (Pittsburgh, PA) uses a subspecialty-based practice, including “centers of excellence” (COEs) responsible for the diagnosis of cases in the thoracic, gastrointestinal, bone and soft tissue, neurologic, head and neck, genitourinary, breast and gynecologic, and transplant realms. Within this structure, we sought to develop and institute a pre–sign-out QA tool (PQAT) that could use the laboratory information system (LIS) to automatically and randomly select 8% of all cases for mandatory review by a second pathologist practicing in the same COE.

Materials and Methods

Our department’s existing LIS, CoPathPlus (Cerner, Kansas City, MO), underwent several vendor-assisted software modifications to create the new PQAT. To accommodate existing departmental workflow, it was determined that cases should be randomly selected by the LIS for review before verification by the primary case pathologist. Furthermore, to ensure the randomness of the process, the case pathologist should have no knowledge that a given case was selected for review until the moment of verification/sign-out. Finally, once selected, the case should move to a second pathologist for independent review and should not be available for sign-out by the primary pathologist until that review is completed; the system should also allow for feedback from the reviewing pathologist.

Within this framework, modifications were made to the case work list and verification functions in CoPathPlus, allowing for 8% of each pathologist’s workload to be randomly selected by the LIS when the primary pathologist attempts to verify the case by pressing the sign-out button and entering his or her electronic signature. The primary pathologist is blinded to this activity by the LIS. When the case is selected, it moves automatically from the primary work list to a separate PQAT work list and awaits review by a second pathologist, accompanied by a dialogue box notifying the primary pathologist of the need for review. The reviewing pathologist, assigned by a predetermined work schedule, then reviews the slides and report, agrees or disagrees with the report as written, and enters his or her comments in the LIS before returning the case to the primary pathologist’s work list for final verification/sign-out.
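To make the selection and work-list logic concrete, the following is a minimal sketch in Python of the behavior described above. It is our own illustration rather than actual CoPathPlus code; the Case object, work lists, and function names are hypothetical.

```python
import random
from dataclasses import dataclass
from typing import Optional

REVIEW_FRACTION = 0.08  # mirrors the 8% random review rate described in the text

@dataclass
class Case:
    accession: str
    primary_pathologist: str
    status: str = "pending_signout"   # -> "awaiting_qc_review" -> "verified"
    reviewer_comment: Optional[str] = None

pqat_worklist: list = []     # cases awaiting second review
primary_worklist: list = []  # cases returned to the primary pathologist

def attempt_verification(case: Case) -> bool:
    """Invoked when the primary pathologist enters an electronic signature.

    Returns True if the case verifies immediately, False if it is randomly
    diverted to the PQAT work list and must await second review.
    """
    if case.status == "awaiting_qc_review":
        return False  # locked until the reviewing pathologist finishes
    if random.random() < REVIEW_FRACTION:
        # Selection occurs only at the moment of sign-out, so the primary
        # pathologist has no advance knowledge that review will be required.
        case.status = "awaiting_qc_review"
        pqat_worklist.append(case)
        return False
    case.status = "verified"
    return True

def complete_review(case: Case, comment: str) -> None:
    """Reviewer records comments and returns the case for final sign-out."""
    case.reviewer_comment = comment
    case.status = "pending_signout"
    pqat_worklist.remove(case)
    primary_worklist.append(case)
```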

Table 1

Levels of Agreement/Disagreement Used in 8% Pre–Sign-Out Quality Assurance Tool



Following its development, the PQAT was applied to surgical pathology and cytology cases in our department. The initial phase involved 2 months of testing in the genitourinary and gastrointestinal COEs, followed by a rollout to other surgical pathology and cytology cases in the first months of 2009. Because of a very high percentage of a priori second review of cases in the transplant COE, this group has not been included in the PQAT, but could be in the future.


Results

The resultant final surgical pathology or cytology report does not display the reviewer’s comments or any overt record of the case having been selected and reviewed for QA. These data are stored in the LIS, with the ability to generate reports for periodic review by the QA coordinator and the departmental QA and Patient Safety Committee (Image 4). The final surgical pathology report contains only the reviewing pathologist’s name as a record of his or her consultation on the case, which appears in a unique “consulting pathologist” section. The percentage of cases selected for review can be adjusted to meet any departmental or institutional requirements, and the PQAT is easily expandable to any “bench” or pathologist in the department. Furthermore, the PQAT can easily be customized to randomly review an expanded subset of cases, such as biopsies, or a larger percentage of cases from an individual pathologist, such as one who is new to the system and is in a review period. The overall workflow of the process is illustrated in Table 2.
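As a purely illustrative sketch of how the review percentage could be made configurable per bench, per pathologist, or per specimen subset, consider the following Python fragment; the rate table, keys, and function names are hypothetical and are not part of CoPathPlus.

```python
import random

DEFAULT_RATE = 0.08  # departmental baseline described in the text

# Hypothetical overrides: (bench, pathologist) -> review fraction
REVIEW_RATES = {
    ("gastrointestinal", None): 0.08,           # bench-level default
    ("genitourinary", "new_pathologist"): 0.25,  # higher rate during a review period
}

def review_rate(bench: str, pathologist: str, specimen_type: str) -> float:
    """Return the fraction of cases to divert for pre-sign-out review."""
    if specimen_type == "biopsy":
        # Example of an expanded subset, as mentioned in the text
        return max(DEFAULT_RATE, 0.10)
    return REVIEW_RATES.get(
        (bench, pathologist),
        REVIEW_RATES.get((bench, None), DEFAULT_RATE),
    )

def select_for_review(bench: str, pathologist: str, specimen_type: str) -> bool:
    return random.random() < review_rate(bench, pathologist, specimen_type)
```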

Preliminary data indicate that case TAT has not been significantly affected by the implementation of the PQAT. Between January and August 2009, 1,523 surgical pathology cases were subjected to review based on selection by the PQAT. The average TAT for the reviewed cases was 2.47 days, whereas that for all nonreviewed surgical pathology cases was 2.13 days (P = .84; paired t test). Of the 1,523 cases, 1,489 (97.8%) resulted in diagnostic agreement, 33 (2.2%) had minor disagreements, and 1 (0.07%) had a moderate disagreement. There were no major disagreements during this period.
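The agreement breakdown can be recomputed directly from the raw counts reported above; the short Python fragment below simply reproduces that arithmetic (the counts come from the text, and the variable names are our own).

```python
# Reported review outcomes, January-August 2009
total = 1523
counts = {"agreement": 1489, "minor": 33, "moderate": 1, "major": 0}
assert sum(counts.values()) == total

for label, n in counts.items():
    print(f"{label}: {n} ({100 * n / total:.2f}%)")
# agreement: 1489 (97.77%), minor: 33 (2.17%), moderate: 1 (0.07%), major: 0 (0.00%)
```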

Discussion

We report the development and testing of a novel, LIS-driven PQAT that allows prospective random review of a percentage of cases by a second pathologist. By focusing on pre–sign-out review, the tool enables real-time QA activity rather than a retrospective process, permitting a direct and immediate effect on patient care by preventing potentially serious errors in addition to supporting departmental process improvements. It is particularly useful in a department such as ours, where the practice is based on subspecialty benches and there is no other routinely performed prospective review of cases. The automated and prospective nature of the process reduces the labor of pulling cases from slide files for retrospective review and allows real-time data collection and analysis through an embedded spreadsheet. Furthermore, anecdotal feedback from departmental faculty and clinical colleagues has been positive. The system was presented to the institutional Procedure Evaluation Committee, where it was approved and well received by committee members from several clinical departments.

In our institution, the PQAT is currently being used to provide for random review of 8% of most cases in the surgical pathology subspecialties noted earlier, as well as in cytology. In addition, the 8% random review of cases before verification is augmented in our department by more traditional QA practices, including frozen section/final diagnosis correlation, intradepartmental consultations, case reviews at weekly sub-specialty conferences, and review of cases by a second pathologist in preparation for weekly clinicopathologic conferences such as tumor boards. The 8% review level was chosen by the departmental QA and Patient Safety Committee to result in an approximately 10% to 12% overall second review rate for most cases in the department when the PQAT is combined with these other QA modalities.

Where the PQAT is being used, all cases are considered for review by the tool, including cases subjected only to gross examination. Although the inherent nature of such “gross-only” cases makes them perhaps less likely to generate serious errors that have a major impact on patient care, we believe that their inclusion is important for identifying potential system and process errors, such as those that may occur in specimen accessioning, gross examination, and/or report generation. In addition, the randomness of the selection process is crucial to the prevention of selection biases that may occur when subjecting only a specific subset of cases (such as newly diagnosed malignancies) to review.

Aside from simple agreement, 3 levels of disagreement are available in the PQAT for selection by the reviewing pathologist. If the reviewer has determined that he or she does not agree, the next decision point is whether the level of disagreement is major or nonmajor, based on a medical judgment of whether there is a potential for significant impact on patient care. One example of a major discrepancy is a malignant diagnosis not reported or missed by the primary pathologist. Although no absolute criteria exist for this determination, this general approach to the subclassification of discrepancies into major and nonmajor has been used previously.2,4,6,7,10 Nonmajor discrepancies can be divided into those that are essentially of interest only to other pathologists (minor) and those that may carry some clinical importance but are of little direct consequence to the patient’s outcome (moderate). Although minor disagreements may be of importance primarily to other pathologists, they provide a venue for discussions on practice and approach to diagnosis and can, therefore, be a very useful part of the PQAT-associated activity.
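The decision sequence described above can be expressed as a small classification helper. The following is a schematic rendering in Python; the boolean inputs stand in for the reviewer’s medical judgment and are not criteria encoded in the LIS.

```python
from enum import Enum

class ReviewOutcome(Enum):
    AGREEMENT = "agreement"
    MINOR = "minor disagreement"        # of interest mainly to other pathologists
    MODERATE = "moderate disagreement"  # some clinical relevance, little effect on outcome
    MAJOR = "major disagreement"        # potential for significant impact on patient care

def classify(agrees: bool, potential_significant_impact: bool,
             some_clinical_importance: bool) -> ReviewOutcome:
    """Mirror the reviewer's decision sequence described in the text."""
    if agrees:
        return ReviewOutcome.AGREEMENT
    if potential_significant_impact:
        return ReviewOutcome.MAJOR
    return (ReviewOutcome.MODERATE if some_clinical_importance
            else ReviewOutcome.MINOR)
```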

When discrepancies are encountered, the primary and reviewing pathologists are required to discuss the case and to attempt to reach a resolution. If these 2 pathologists are able to agree on the case, the report is released, with or without modifications. When the reviewer and primary pathologist cannot reach an agreement, the case is referred to the director of the COE for review. If a satisfactory resolution cannot be reached among the 3 pathologists, a mutually agreeable outside consultant is chosen as an arbiter. The intent of this approach is to maximize the potential effect of the PQAT on real-time, patient-focused quality improvements and error reduction, an effect missing from retrospective approaches to case review. Although no major errors have been encountered in the initial phases of the use of the tool, this remains an important potential benefit of the process.
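The escalation path for discrepant cases can likewise be summarized in a brief sketch; the function and argument names are hypothetical, and the judgments themselves remain with the pathologists.

```python
def resolve_discrepancy(primary_and_reviewer_agree: bool,
                        coe_director_resolves: bool) -> str:
    """Follow the escalation order described above."""
    if primary_and_reviewer_agree:
        return "release report, with or without modification"
    if coe_director_resolves:
        return "release report per COE director review"
    return "refer to a mutually agreeable outside consultant"
```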

We have developed and implemented a PQAT that is based on departmental workflow in a subspecialty-oriented practice model and that uses the departmental LIS to randomly select a subset of all cases for review. Currently, 8% of most surgical pathology and cytology cases at our institution are being reviewed by a second pathologist within the same COE, but this level of case selection is adjustable depending on the level of monitoring needed. In addition, although the tool was developed to provide for random case selection, it can also be adjusted to allow for more focused selection of cases from difficult and/or controversial subspecialty areas. In this way, more problematic cases (possibly subsets identified during the random case selection and review) can be reviewed at an increased rate, augmenting the “baseline” QA activity. Finally, the tool is expandable and can be made applicable to any type of practice, including general pathology services. Data generation and analysis are underway after successful testing and implementation throughout most of the department, and we hope to more fully report our initial experience with 8% random case review in the near future.

Table 2

Workflow of Pre–Sign-Out Quality Assurance Tool



Pre–sign-out quality assurance tool report of review outcomes. After the opinion of the reviewing pathologist has been entered, the outcome can be followed on the pre–sign-out review report, where the reviewer’s opinion appears (arrows). Although an 8% review is currently being used, both of the reports seen in Images 2 and 4 were generated at the 5% pre–sign-out review level, illustrating the ability to modulate the review level as needed.

References

1. Association of Directors of Anatomic and Surgical Pathology. Recommendations on quality control and quality assurance in anatomic pathology. Am J Surg Pathol. 1991;15:1007-1009.

2. Association of Directors of Anatomic and Surgical Pathology. Recommendations for quality assurance and improvement in surgical and autopsy pathology. Am J Surg Pathol. 2006;30:1469-1471.

3. Frable WJ. Surgical pathology: second reviews, institutional reviews, audits, and correlations. Arch Pathol Lab Med. 2006;130:620-625.

4. Manion E, Cohen MB, Weydert J. Mandatory second opinion in surgical pathology referral material: clinical consequences of major disagreements. Am J Surg Pathol. 2008;32:732-737.

5. Nakhleh RE. What is quality in surgical pathology? J Clin Pathol. 2006;59:669-672.

6. Raab SS, Grzybicki DM. Measuring quality in anatomic pathology. Clin Lab Med. 2008;28:245-259.

7. Raab SS, Grzybicki DM, Mahood LK, et al. Effectiveness of random and focused review in detecting surgical pathology error. Am J Clin Pathol. 2008;130:905-912.

8. Raab SS, Nakhleh RE, Ruby SG. Patient safety in anatomic pathology: measuring discrepancy frequencies and causes. Arch Pathol Lab Med. 2005;129:459-466.

9. Raab SS. Improving patient safety by examining pathology errors. Clin Lab Med. 2004;24:849-863.

10. Renshaw AA, Gould EW. Measuring errors in surgical pathology in real-life practice: defining what does and does not matter. Arch Pathol Lab Med. 2007;127:144-152.
