The article is about a recent study in which Vytorin measurably reduced cholesterol but did nothing to slow the progression of heart disease. In it, the writer, a physician, muses on how various agencies and insurers measure lots of things, and asks whether those numbers really mean anything in quantifying quality of care.
Here are a few quotes that lead directly into some of my thoughts:
"You can't judge the quality of a restaurant meal just by reading the grocery list. Likewise, you can't gauge the quality of a doctor's care by extrapolating from a few numbers plucked from the medical records of patients.The author goes on to name some measurements that might directly lead to quality of care, and then writes in conclusion:
"Still, managed care companies and quality watchdogs have long used cholesterol values as stand-ins for physicians' performance in preventing heart disease. And that's not unusual. If a doctor's performance is judged at all, it's usually based on lab test results and insurance claims data about treatments.
"But it seems to me that government and insurance overseers seem to be interested in measuring a lot of unconnected minutiae that isn't leading us to better quality.
"Some doctors object when their compensation is tied to treatment goals decided by outsiders. I don't fear pay for performance. I fear pay for performance for measures that don't really matter."
"We don't improve care by blindly following the numbers. We get better outcomes by involving patients in managing their diabetes. And as more people struggle with chronic diseases, the lessons of diabetics will apply to more and more patients.Those of you that know me know that I worked in software development for some 15 years. A similar debate goes on inside that industry. The question asked is, "What does quality software mean? What characteristics does it exhibit?" We place emphasis on the easy areas: Reduction in quantity of defects, no showstopper defects, acceptable measures of execution performance, etc. (The meaning of "defect," however, was subject to interpretation, particularly between developers and testers.) What managers, and particularly executive level management, cared about most was the count of defects, and particuarly severe ones, coming down until the software was shipped out for general use.
"As the quest for quality continues, we need to resist the urge to subjugate doctors to tracking minutiae at the expense of ignoring the outcomes that really matter."
There are many ways to massage such numbers. If severe defects couldn't be fixed and they were in an area that didn't absolutely need to ship, we might just cut that piece for now and try to ship it in a later release. Developers and testers might debate the real seriousness of certain issues and try to get defects reclassified into a less serious category. Another approach was "code and pray": make some shot-in-the-dark fixes, then pray that the new code paths taken might avoid whatever was causing the defect to appear. And if the product was about to ship, with press releases already on their way, marketing and sales already selling the product, and customers expecting it any day now, it wasn't unheard of to move every remaining defect to the "Postponed" column for an immediate hotfix or a soon-to-be-released service pack, just so we could say we made the date and the company could book the revenue before the end of the fiscal quarter. After all, the most important thing wasn't meeting customer quality expectations, but meeting shareholders' earnings-per-share expectations and projecting favorable revenues for future quarters. (Yes, I'm jaded! Now you know one of the reasons I don't work in the software industry anymore.)
Our modern mindset wants to quantify everything. If there were some way of distilling all of life into a single, numerical measure, I'm sure we would do it. It wouldn't surprise me if someone were trying.
All this got me thinking about "church quality." How do we measure it? I don't know about the other denominations or congregational churches, but Adventists love to count attendance, baptisms, and tithe. If these are growing, the higher-ups are happy. If they're stagnant, this is reason for concern. And if they are going down, alarm bells go off.
Certainly these numbers are some indicators of church health. But how well do they really correlate with the quality of the congregation and of the pastor? And how accurate are they even in assessing church health?
I might suggest that in addition to these measurements, surveying the surrounding neighborhood, the community, and other churches about the church, its members, and its staff might actually provide a better picture: Do you view the church positively or negatively, and why? Do you view the church as cooperative or antagonistic? In what ways are the church members and staff involved in your community? If the church were to go away, would you miss it or not? And so on.
The church is often called a hospital for the brokenhearted (or for sinners, if you really prefer that term). If the people who make up the church are the physicians, nurses, and other caregivers (we will set aside for the moment that we are also patients at the same time), then the WSJ article may hit closer to home than we care to admit. And so I close my little essay by reiterating a few select quotes from above and asking you to think about them in terms of your church and your ministry:
"You can't gauge the quality of a doctor's care by extrapolating from a few numbers plucked from the medical records of patients."
"I fear pay for performance for measures that don't really matter."
"We don't improve care by blindly following the numbers."
"As the quest for quality continues, we need to resist the urge to subjugate doctors to tracking minutiae at the expense of ignoring the outcomes that really matter."