Patients should not use The Leapfrog Group’s Hospital Safety Score, which gives hospitals an A to F letter grade based on their likelihood of causing harm, because it’s inaccurate, the American Hospital Association said this week.
The scoring applies a 26-measure questionnaire that is biased, uses unreliable measures, applies variable weights to the same measures for different groups of hospitals, and contains significant errors, AHA President and CEO Rich Umbdenstock said in a five-page letter to Leah Binder, Leapfrog President and CEO.
Further, Umbdenstock said, “we are hearing from enough of our members about significant data issues that we are concerned about the manipulation of the data.”
On Wednesday, Binder responded, addressing each of the AHA’s three points. She said Umbdenstock is incorrect or misinformed in many of his claims. “With regard to the idea that Leapfrog deliberately manipulated data: This is a very serious charge for you to make without offering a single example to support it. We will launch a full investigation of any such example should you find one,” she wrote.
The Leapfrog Group, a non-profit organization launched by employers who wanted to improve hospital quality of care, released safety scores on June 5 for 2,651 hospitals based on the occurrence of preventable events such as falls, pressure ulcers, or bloodstream infections, and on whether the hospitals have safety systems in place to prevent errors and complications. Hospitals reacted angrily, saying they were caught off guard, and heatedly criticized the survey results.
Days later, some in the hospital industry went further, saying they suspected Leapfrog was trying to drum up more support for its annual hospital survey, alleging that hospitals that participated in Leapfrog’s data collection got better safety grades.
The AHA’s three main criticisms and Binder’s responses are as follows:
1. Bias toward hospitals participating in The Leapfrog Group’s survey
“Chief among our concerns is that the methodology The Leapfrog Group uses appears to favor its own survey over other similarly reliable sources of information,” Umbdenstock said.
Of all the hospitals in the U.S., only about 1,000 participate in the Leapfrog Group’s voluntary reporting program, which collects information on whether the hospital uses a computerized provider order entry system (CPOE) and whether it has dedicated intensivists in the intensive care unit.
For hospitals that don’t voluntarily participate in the Leapfrog survey, a secondary source was used, such as the American Hospital Association’s own Health Information Technology annual survey.
Leapfrog participant hospitals can earn 100 points toward a high letter grade in each of those categories, but non-participating hospitals judged on the AHA’s HIT survey can get only 15 points.
“Therefore it is unclear why The Leapfrog Group would give so little weight to an answer to a similar question in the AHA survey,” he wrote.
“By assigning vastly different point scores to similar information derived from reliable secondary sources, we are concerned that the scorecard can lead patients to inappropriate conclusions,” he wrote.
Yale-New Haven Hospital is a good example. While it has a fully functional electronic health record and 137 full-time equivalent intensivists on staff, it received only 30 points instead of 200 because it doesn’t participate in the Leapfrog survey. That meant Yale-New Haven got a C “when we believe it rightfully should have received an ‘A,’” Umbdenstock wrote.
Binder replied that what the AHA’s survey collects on the CPOE and intensivist measures “is far less information than what is required of hospitals that report to the Leapfrog Hospital Survey.” In order to earn full credit, “hospitals not only have to demonstrate a high level of adoption, but also take a six-hour simulation test to prove their system works safely.”
She said Umbdenstock’s criticism “is a misreading” of the AHA survey’s scoring algorithm.
With respect to a hospital getting a C that should have gotten an A, Binder replied that hospital’s score “was hurt by its lower than average adherence to surgical care guidelines, rates of hospital-acquired conditions, and lower than average performance on other measures included in the Hospital Survey Score.”