Hillary Thurman is looking for some insight. She writes, "We have recently standardized hemostasis instruments across our system. The rest of the system has been using a combined geometric mean for their PT reagent (same lot, same instruments), but my site has never been involved in this. In addition, they do not change the geomean between reagent lots unless the new lot geomean is more than +/- 0.3 from the current lot."
"My site has always calculated an individual geomean for each instrument performing testing for each lot of reagent. Is a systemwide geomean possible? Responsible? Should we be keeping the geomean constant? I am not finding any guidance in the CLSI or WHO documents to address these situations."
Response from my colleague and hemostasis expert Bob Gosselin, who is currently in Panama meeting with the International Council for Standardisation in Haematology: "Well, the easiest thing to do is the math! Is there really any clinically significant difference (AKA a dosing change)? Most opinion leaders stipulate an allowable INR difference between 0.3–0.5. Attached are some examples of calculated INRs using MNPT +/- 0.3 s and different ISIs. Pretty underwhelming, especially if the ISI is low, so keeping a geomean constant between sites/analyzers if within 0.3 s seems reasonable."
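Bob's math is easy to reproduce, since the INR is computed as (patient PT / MNPT) raised to the ISI power. A minimal sketch, using hypothetical PT, MNPT, and ISI values chosen only to illustrate a full 0.3 s geomean shift (these are not Bob's attached examples):

```python
# Standard INR formula: INR = (patient PT / mean normal PT) ** ISI
def inr(pt_s: float, mnpt_s: float, isi: float) -> float:
    return (pt_s / mnpt_s) ** isi

# Hypothetical case: patient PT of 30.0 s, with the MNPT shifted by the
# full 0.3 s threshold (12.0 s vs 12.3 s), at two reagent ISIs.
for isi in (1.0, 1.3):
    inr_old = inr(30.0, 12.0, isi)  # geomean held constant
    inr_new = inr(30.0, 12.3, isi)  # new-lot geomean adopted
    print(f"ISI {isi}: INR {inr_old:.2f} vs {inr_new:.2f} "
          f"(difference {inr_old - inr_new:.2f})")
```

Even at an ISI of 1.3, the full 0.3 s MNPT shift moves this INR by only about 0.1, well inside the 0.3–0.5 allowable difference cited above.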
"However, what is unclear from this conversation is how the facilities determine the geomean, and how they ascertain that other sites/instruments are within the 0.3 s threshold. Is that just done once? That might be suitable if 1) their EQA data suggest equivalence between sites/instruments, or 2) these sites share samples of varying INRs to verify equivalence. This is a requirement if patient samples from within their institution may be tested on all of these instruments and/or sites. They should certainly have robust documentation of their practice, including a policy dictating their checks and balances: comparison frequency and acceptability/threshold limits between sites and instruments for these comparison samples. I would select comparison samples that span the INR therapeutic range (~1.5–4.0), and I would also include at least one comparison INR > 5.0. Their latter question remains unanswered until we have a little more information about their laboratory practice."
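The pairwise check Bob describes is straightforward to formalize: circulate comparison samples spanning the therapeutic range, then verify that every instrument pair agrees within the chosen limit. A hypothetical sketch (the site names, INR values, and the 0.3 INR acceptability limit are illustrative assumptions, not a documented policy):

```python
from itertools import combinations

# Hypothetical comparison-sample results: one INR per instrument per sample,
# spanning the therapeutic range plus one high (> 5.0) sample.
results = {
    "sample_low":  {"site_A": 1.5, "site_B": 1.6, "site_C": 1.5},
    "sample_mid":  {"site_A": 2.8, "site_B": 2.9, "site_C": 3.3},
    "sample_high": {"site_A": 5.2, "site_B": 5.4, "site_C": 5.3},
}

THRESHOLD = 0.3  # assumed between-instrument acceptability limit, INR units

def outlier_pairs(sample_results: dict[str, float], limit: float) -> list:
    """Return instrument pairs whose INRs differ by more than the limit."""
    return [(a, b) for a, b in combinations(sample_results, 2)
            if abs(sample_results[a] - sample_results[b]) > limit]

for sample, per_site in results.items():
    flagged = outlier_pairs(per_site, THRESHOLD)
    if flagged:
        print(f"{sample}: investigate {flagged}")
```

Running the check per distributed sample, at the policy's stated frequency, produces exactly the kind of documentation trail Bob recommends.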
Here is an excerpt from DeVries H, Fritsma GA. Chapter 2, Quality assurance in hemostasis testing. In: Keohane EM, Otto CN, Walenga JM, eds. Rodak's Hematology: Clinical Principles and Applications, 6th Edition. Elsevier, 2020. [Courtesy of Heather DeVries]
Each site validates its instrument-reagent systems, perhaps relying on central laboratory support. Subsequently, the system is validated as a whole. The QC specialist verifies performance across the system with comparability studies, which resemble proficiency surveys. These are required at least twice a year, reflecting proficiency survey agency schedules. Typically, the central laboratory QC specialist prepares subject specimens to distribute to satellite facility personnel. For coagulation, although fresh plasma would be ideal, volume requirements may necessitate the use of frozen or preserved pooled plasma, preserved aliquots from proficiency agencies, or QC materials. QC material is only acceptable if all the sites’ reagents and QC materials share the same lot number, which may not apply if the coagulometers are different. For hematology, patient anticoagulated whole blood specimens or preserved QC materials may be used. System-wide comparison efforts are most effective when all facilities are using the same reagent lots and identical or at least similar equipment. In any event, the QC specialist compares satellite facility specimen results to analyze each location’s accuracy. Outliers trigger troubleshooting and remedial instruction.
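The comparison step at the end of that workflow can be as simple as flagging any site whose result deviates from the group mean by more than a set limit. A minimal sketch with synthetic PT results (the site names, values, and the 7.5% limit are assumptions for illustration, not a published criterion):

```python
from statistics import mean

# Synthetic comparability-study results for one distributed specimen (PT, in s)
site_results = {"central": 13.1, "east": 13.4, "west": 12.8, "clinic": 14.6}

LIMIT_PCT = 7.5  # assumed acceptability limit: % deviation from the group mean

group_mean = mean(site_results.values())
for site, pt in site_results.items():
    deviation_pct = 100 * (pt - group_mean) / group_mean
    status = "OK" if abs(deviation_pct) <= LIMIT_PCT else "OUTLIER: troubleshoot"
    print(f"{site}: {pt:.1f} s ({deviation_pct:+.1f}%) {status}")
```

Here the "clinic" result sits more than 7.5% from the group mean, which would trigger the troubleshooting and remedial instruction described above.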
Multi-Site Reference Interval and Therapeutic Range
Once an assay is validated, each facility develops a reference interval (RI) and, when applicable, a therapeutic range. To reduce the inevitable confusion generated from multiple RIs, the central laboratory may develop and publish a system-wide RI from individual facility intervals, presuming the individual RIs are similar. For this purpose, the QC specialist employs a statistical program such as EP Evaluator® (Data Innovations), which the specialist may choose to make available throughout the system for data collection. The computed "central" RI is distributed throughout the system and replaces individual RIs.
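When the facility data are similar enough to merge, a central RI can also be recomputed from the pooled reference values. A minimal sketch of the common nonparametric approach, taking the central 95% of at least 120 pooled results (the PT data below are synthetic, and this is not EP Evaluator's algorithm):

```python
# Nonparametric reference interval: the central 95% of pooled reference
# values, using a simple nearest-rank percentile estimate.
def reference_interval(values: list[float]) -> tuple[float, float]:
    ordered = sorted(values)
    n = len(ordered)
    lo = ordered[max(0, round(0.025 * n) - 1)]      # ~2.5th percentile
    hi = ordered[min(n - 1, round(0.975 * n) - 1)]  # ~97.5th percentile
    return lo, hi

# Synthetic pooled PT results (s) from all facilities; 120 values is the
# usual minimum for a nonparametric RI.
pooled = [round(11.0 + 0.05 * i, 2) for i in range(120)]
low, high = reference_interval(pooled)
print(f"System-wide RI: {low}-{high} s")
```

In practice the pooled values come from each facility's reference-subject studies, and the merge is only defensible after the comparability checks above confirm the sites agree.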