Rethinking how CMS weights clinical quality measures to calculate Hospital Star Ratings may drive improvement.
Medical experts largely agree that the CMS Hospital Star Ratings system is not a perfect measure of hospital quality, but one new approach aims to change that.
The Star Ratings system, which looks at a number of quality measures that should ultimately indicate a positive hospital experience, aims to help patients make informed decisions about their healthcare. By making the five-star scale familiar and accessible to the public, CMS intends to help patients and their caregivers select a facility for their care.
But that system is flawed, many industry leaders have stated. The Hospital Star Ratings system distills hospital quality, an inherently complex topic, into a simple five-star rating, which some advocates say is reductive. Additionally, the ratings system can make apples-to-oranges comparisons, and with changes in the weighting of certain quality measures, it is hard for hospitals to anticipate their quality rating for a given year.
Such was the case at Rush University Medical Center in Chicago. As reported in a new paper out of the University of Chicago Booth School of Business, Rush University Medical Center had long been a five-star hospital, at least until CMS made updates to the 2018 Hospital Star Ratings methodology.
“While in previous releases nearly all weight in the Safety of Care group was placed on the PSI-90 composite score, in the July 2018 release the weight shifted to Complications from Hip and Knee Surgeries, which impacts many fewer patients,” Dan Adelman, the Charles I. Clough, Jr. professor of Operations Management at Chicago Booth, wrote in an Operations Research study.
“As a result, many hospitals unexpectedly changed in their star ratings, including Rush University Medical Center which dropped from 5 stars down to 3 even though they had improved along many measures.”
This trend occurred because the CMS Hospital Star Ratings system uses a latent variable model (LVM), which links what Adelman calls an “unobservable variable” to certain hospital quality measures. If a quality measure is strongly tied to that latent variable, it is given more weight when determining a hospital’s ultimate star rating.
The LVM is inherently flawed, Adelman explained, principally because it keeps hospital leaders in the dark about where to target practice improvements and makes certain changes in star ratings somewhat arbitrary. A hospital may improve on most quality measures, but if performance slips or stagnates on a measure tied to the latent variable, the hospital will be penalized.
“As a consequence, hospital administrators cannot reasonably anticipate how the weights will change over time, so as to decide on where to focus improvement efforts,” Adelman explained. “The fundamental problem is this: even if a hospital improves along every measure relative to all other hospitals, the LVM offers no guarantee that a hospital’s score will not decrease.”
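The instability Adelman describes can be illustrated with a toy version of the weighting step. The sketch below is not the actual CMS methodology (which fits a full latent variable model to real measure data); it simply approximates factor loadings with the leading eigenvector of a hypothetical correlation matrix among four made-up measures. Measures that correlate most strongly with the shared latent factor receive the most weight, so the weights are driven by correlations across all hospitals, something no single hospital controls.

```python
import numpy as np

# Hypothetical correlation matrix among four quality measures.
# (Illustrative numbers only; not CMS data.) Measures 0-2 move together,
# while measure 3 is only weakly related to the others.
corr = np.array([
    [1.0, 0.8, 0.8, 0.2],
    [0.8, 1.0, 0.7, 0.2],
    [0.8, 0.7, 1.0, 0.2],
    [0.2, 0.2, 0.2, 1.0],
])

# One-factor proxy: take the loadings from the leading eigenvector of the
# correlation matrix, then normalize them into measure weights.
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
loadings = np.abs(eigvecs[:, -1])          # eigenvector of the largest eigenvalue
weights = loadings / loadings.sum()

print(np.round(weights, 3))
```

In this toy setup, the weakly correlated measure 3 gets the smallest weight. If the correlation structure shifts between releases, the weights shift with it, even for a hospital whose own performance never changed.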
Adelman developed his own methodology that aims to improve on the CMS approach. The methodology gives every hospital its own set of measure weights that maximize its score relative to other hospitals. These weights, which Adelman said are subject to certain constraints, aim to highlight the areas where hospitals actually invest.
“While giving hospitals their own measure weights may seem to give them unfair advantage, our approach does not choose the weights arbitrarily,” he wrote in the paper. “Rather, every hospital receives its highest score possible and is subject to the same collection of constraints that ensure weights obey rational properties.”
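This “best-case weights” idea can be sketched as a small optimization: each hospital receives the weights that maximize its own composite score, while every hospital faces the same constraints (here, weights sum to one and each stays within a floor and a cap). The hospital names, measure groups, scores, and bounds below are all hypothetical; this illustrates the general idea, not Adelman's actual formulation.

```python
import numpy as np

def best_case_score(x, lb=0.1, ub=0.5):
    """Maximize x @ w subject to sum(w) == 1 and lb <= w_i <= ub.
    With only a sum constraint and box bounds, the optimum pours weight
    onto the highest-scoring measures first (fractional-knapsack style)."""
    x = np.asarray(x, dtype=float)
    w = np.full(len(x), lb)          # every measure keeps the floor weight
    budget = 1.0 - w.sum()           # weight left to distribute
    for i in np.argsort(-x):         # best-scoring measures first
        extra = min(ub - lb, budget)
        w[i] += extra
        budget -= extra
    return float(x @ w), w

# Hypothetical standardized group scores (higher is better) for three hospitals
# across four measure groups: mortality, safety, readmission, experience.
scores = {
    "Hospital A": [0.8, 0.2, 0.6, 0.7],
    "Hospital B": [0.3, 0.9, 0.5, 0.4],
    "Hospital C": [0.6, 0.6, 0.6, 0.6],
}

for name, x in scores.items():
    best, w = best_case_score(x)
    print(f"{name}: best-case score {best:.2f} with weights {np.round(w, 2)}")
```

Because every hospital solves the same problem under the same constraints, no hospital's weights are chosen arbitrarily, and in this toy setup a hospital that improves on every measure cannot see its best-case score fall, which is the guarantee Adelman says the LVM lacks.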
And ultimately, these improvements should make things easier for the patient. The CMS Hospital Star Ratings are useful because they equip interested patients with the data necessary to make a treatment decision. They also fuel competition between hospitals. But when the methodology behind the ratings is flawed, the ratings help no one.
“My suggested approach rewards hospitals that do well and penalizes hospitals that don’t on the measures they actually report,” Adelman said in a statement. “It proposes an alternative approach that could one day help lead to a better system that allows patients to have more accurate information to make choices. The goal is to create a ratings system that allows competition between hospitals to be based on more realistic assessments of hospital quality and performance than it is today.”
The healthcare industry is increasingly focused on perfecting efforts to measure hospital quality and patient experience. In July 2019, a group of leading industry experts outlined where CMS can improve the HCAHPS survey, a key indicator of patient satisfaction and hospital experience.
The group, led by the Federation of American Hospitals and including the American Hospital Association, America’s Essential Hospitals, the Association of American Medical Colleges, and the Catholic Health Association of the United States, sought to better understand how the industry can improve HCAHPS.
The group argued that after more than a decade in use, HCAHPS had become outdated. Specifically, the survey does not account for the widespread tilt toward value-based care, advances in technology, and the changes in patient preferences and priorities coming as a part of healthcare consumerism.
Suggested improvements fell into two main buckets.
Foremost, restructuring HCAHPS items to be more specific will elicit more precise answers from patients, ultimately yielding the information that hospitals and CMS truly want.
When questions are too broad, patients tend to interpret them within the context of their own circumstances – that is natural, the report authors noted. Revamping survey items both to account for patients’ varied social experiences and to address the key measure a question is meant to evaluate will make the data more actionable.
Second, HCAHPS developers may consider how the social determinants of health (SDOH) affect a patient’s ability to actually complete the survey. The reading levels of HCAHPS materials, including translations, exceed many patients’ health literacy, patient experience leaders reported. Newer iterations of the survey should be written at a reading level informed by a nationwide health literacy assessment, they recommended.
Source: PatientEngagementHIT