CARL TAYLOR
First, some background. I love maps, data and transparency.
In my work, particularly in medical disaster response, we use Google Maps, OpenStreetMap, and a host of other tools, mostly free and open source, to display data, identify needs, and accelerate decision support.
So my DNA begins with strong support for the Commonwealth Fund's new project, WhyNotTheBest.org (http://www.WhyNotTheBest.org), as well as CMS and AHRQ projects seeking to create transparent quality information.
However, I think some caveats are needed along the way, not just in reference to these projects but to the whole concept of ranking, scoring, benchmarking, and measuring hospitals. (Note that I am limiting the first part of my comments to hospitals, with the disclaimer that groups such as CTS and others have done an outstanding job tracking outcomes data for a long time, as have surgical programs using the VHA NSQIP reporting system.)
When we use data for medical disasters, we know the content and can verify it, and we know the context in which the data will be used by the customers, often first responders, government agencies, or non-governmental organizations. Because content verification and contextual usage are keys to effective data mapping, these traits must be present in any health care effort that seeks to promote more effective care and, theoretically, drive consumer decisions.
My first concern is that the data displayed with regard to hospitals are neither sufficiently verified by outside parties nor necessarily accurate. This past year I was part of a group of volunteers working on a study with the Society of Actuaries and Milliman to quantify the cost of medical errors. It was clear to me that at least some data sources around hospital incident data potentially suffered from underreporting.
Now, interestingly enough, in our blame-first, sue-second, fix-third culture, I can’t imagine anyone being surprised that some adverse incidents somehow don’t find their way into a database. To test this theory, I searched the WhyNotTheBest.org site for a Florida hospital’s quality reporting on the management of pneumonia cases. Sure enough, the hospital (which I will not name) scored marvelously, with 99.51% compliance against a national average of 98.01%.
That number would come as a surprise to the family of Thelma (not her real name, in an effort to be HIPAA compliant). Thelma was admitted to this hospital in mid-July of this year with an upper respiratory infection and, while in the care of this hospital from mid-July until her death in August, managed to acquire pressure ulcers, C. difficile, and ventilator-associated pneumonia. Now you can argue that perhaps she was in that 0.49% of cases, or that the data were too recent to have found their way into the reporting systems, but I will remain a skeptic. (And by the way, if the national average of compliance is 98%, why are we keeping score on this at all, since there seems to be little or no basis for comparison? But I digress.)
A few years ago, doing a quality review from claims data for a commercial group, one hospital stood out as having a number of hospital-acquired infections. In an effort to be helpful, I approached the CEO of that hospital and suggested he had an infection control problem that required his attention. His response: “How do I change my coding so you can’t see these events?”
I raise this point not to suggest that we don’t want to know where the great, good, and not-so-good hospitals are, but any data voluntarily submitted and not supported by independent claims audits, lab data, and other documentation comes with the risk that a few hospitals will game the reporting. Moreover, what often drives errors isn’t process but culture. Dr. Mark Miller, CMO of North Mississippi Medical Center, recently gave a talk in which he pointed out that 74% of all airplane crashes occur when the pilots are working together for the first time. Study after study has shown that errors occur when culture fails and when routines are disrupted. Overly stressed surgery schedules, such as adding one more case long after the team should have been off shift and picking up their kids from school, lead to compromises and mistakes.
As a patient, I really want to know how the team in the OR gets along and where my case falls on the day’s schedule. Now you may suggest that culture is embedded in the numbers or in the patient satisfaction data collected by CMS and others. Perhaps, but if the public is going to be led to rely on these data, then the data sources must be broader and independently verified, in much the same way that the FAA supervises airlines for maintenance and training. Moreover, the data must lead to information and knowledge that actually means something to patients as they select a hospital.
To do otherwise leaves us where we are now, where it seems every other billboard along the interstate is an ad for some hospital touting that it is in the top 1% for the treatment of some disease, according to some for-profit rating service or magazine. I am left with the impression that every hospital in South Florida is in the top 1%, so the worst hospitals must all be out of state; except, of course, when I travel I see the same billboards. I guess I can only hope that if an ambulance is ever rushing me to a hospital, the crew will match my condition to a billboard.
Which leads me to my second concern: context, or relevance. Why do we collect hospital data? Because we can. What do we use as benchmarks? Process and opinion. Is this relevant to a patient? Rarely.
At the risk of really sounding like the contrarian I am, is there real value to a patient in learning how well a hospital did in meeting some process survey, taking out a magazine subscription, or hiring a quality consulting firm? And what is the difference, if there is one, between a hospital that scored 99% and one that scored 97%? I am not sure patients know or care. We get so caught up in complexity, risk adjustments, severity scoring, marketing opportunities, and the numbers game that we run the risk of missing the whole point of what the consumer (patient) actually needs. We need to move beyond using To Err Is Human as our only Bible and write a new book called To Be Ill Is Human.
What patients really want answered are questions like this: I have diabetes. Which doctor in my area has kept patients at a stable hemoglobin A1C between 4 and 8?
Patients want outcomes, not process. They want doctors who care, whose patients don’t wind up in the hospital, and who will work with them to produce better health management. They are less concerned with the how and more concerned with the what.
If we want to be meaningful in providing transparent data to patients and to steer them toward more effective health care, then we must migrate from the inpatient setting to the offices of primary care physicians, internists, and non-invasive cardiologists. And we must move there not to track process metrics like HEDIS, but outcomes metrics.
Interestingly enough, we can do this. In Boston, managed care organizations have moved from pay for performance (read: pay for process) to pay for outcomes. Granted, the outcomes data are lab-driven, but lab data (or, for that matter, physiologic data like weight and blood pressure) are easy to access.
Lab data and physiologic data, I would suggest, are far more valuable than claims data or self-reported errors (such as HCUP data) in measuring the true nature of our interactions with the health system. Lab and physiologic data represent a longitudinal scoring mechanism from which we can infer the effectiveness of physician care, the successful use of medications, and patient compliance with the drug, diet, and lifestyle requirements that promote effective health.
Outcomes data can be simple, understandable, actionable, and meaningful. What I would like to see is a billboard on I-95 that says that during the past year 95% of Dr. Smith’s patients had no visits to the emergency room, reduced their BMI, held their blood pressure under 140/90, lost weight, and became successful managers of their own health. I would go to Dr. Smith based on that data, if MapQuest can find him.
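To underline how simple such a panel-level outcomes summary could be once lab and physiologic data are in hand, here is a minimal sketch. Everything in it is hypothetical: the field names (er_visits, systolic, diastolic, bmi_change) and the thresholds are invented for illustration, not drawn from any real reporting system.

```python
# Hedged sketch: a panel-level outcomes summary like the hypothetical
# "Dr. Smith" billboard. All field names and records are invented for
# illustration; real data would come from lab/EHR feeds.

def panel_summary(patients):
    """Return the share of a physician's panel meeting all outcome targets."""
    if not patients:
        return 0.0
    meeting = 0
    for p in patients:
        no_er = p["er_visits"] == 0                          # no ER visits this year
        bp_ok = p["systolic"] < 140 and p["diastolic"] < 90  # BP under 140/90
        bmi_down = p["bmi_change"] < 0                       # BMI reduced
        if no_er and bp_ok and bmi_down:
            meeting += 1
    return meeting / len(patients)

# Example usage with made-up records:
panel = [
    {"er_visits": 0, "systolic": 128, "diastolic": 82, "bmi_change": -1.2},
    {"er_visits": 2, "systolic": 150, "diastolic": 95, "bmi_change": 0.4},
    {"er_visits": 0, "systolic": 135, "diastolic": 85, "bmi_change": -0.5},
]
print(f"{panel_summary(panel):.0%} of panel met all outcome targets")
# prints: 67% of panel met all outcome targets
```

The point of the sketch is that the hard part is not the computation, which is a few lines, but assembling verified longitudinal data to feed it.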
The goal of quality identification is admirable, and the opportunity to use emerging mapping tools and web services to create transparency is a real step forward. But there is work to be done to more accurately identify what consumers really want answered and how best to access those data sources.
The first step is to move beyond the hospital and the proceduralists and look more closely at opportunities to promote more effective care in the primary care office, where most care occurs.
Carl Taylor is Assistant Dean of the University of South Alabama College of Medicine and Partner at the Fraser Institute for Health Research.