Based Indicators Help Module 07 Written Assignment

Your hospital’s quality scorecard shows that the facility had a lower rate of compliance for the following quality measures compared to other hospitals in the area:

  • Percent of heart failure patients with left ventricular systolic dysfunction (LVSD) who are prescribed an ACEI or ARB at hospital discharge
  • Number of acute myocardial infarction (MI) patients who are prescribed a beta-blocker at hospital discharge
  • Percentage of ischemic stroke patients administered (given) anti-thrombotic therapy by the end of hospital day 2

Hospital administration wants the performance improvement team to research the possible causes for these rates and to develop some potential solutions that will help improve compliance. Select the components of the processes evaluated in these measures that you think may affect compliance. (You may want to review the measure definitions and content at www.qualitymeasures.ahrq.gov which explain the processes and outcomes involved). Then, develop one or two ideas for solutions for each measure, such as educating staff or changing a workflow.

Using APA format, write a 3-4 page proposal to the performance improvement team that details the clinical and administrative processes which you believe are involved that the team should address in creating an improvement plan. Be sure to identify the clinical and administrative data that will be needed to analyze processes and determine how they affect outcomes (mortality).

CHAPTER SEVEN Integrating Data for Operational Success

Today’s methods of reimbursement by the government and by health maintenance organizations (HMOs) have compelled senior leadership of health care organizations to focus on key operational variables in planning their organizations’ budgets. With the help of the finance department, leadership has come to understand that information about length of stay (LOS), patient throughput, discharge disposition, admission criteria, operating room (OR) turnaround time, and resource consumption has critical financial implications.

Therefore the CEO and senior administrators need to monitor these variables in order to maintain a reasonable budget. Typically, the chief financial officer, through the budget process, reports on these operational and clinical variables without reference to the specifics of patient care or of quality. Quality is considered to be separate from a sound budget, but this distinction between financial and quality information makes it difficult for leadership to identify problems and improve hospital operations. Ideally, the quality management department should provide reports of clinical and operational variables that combine with financial reports to produce the hospital’s budget. A combined report would be especially useful because the government requires hospitals to report quality data for accreditation and higher reimbursement.

In this chapter I will offer examples that reinforce the relationship between maintaining quality standards and gaining operational and financial efficiencies. I will illustrate that when leadership is committed to improving quality, and uses measures and quality data to monitor processes and develop improvements, the organization benefits financially.

DIFFERENT DATA TELL DIFFERENT STORIES ABOUT CARE

Report cards are one useful way to translate the experiences of individual patients into a collective representation of the delivery of care. With the aggregated data of report cards, analysis can move from patients’ responses to the question, “How do you feel?” to information about the probability of recovering from a specific procedure or disease in comparison to a similar patient at a comparable institution. That’s a big leap. The report card reflects what it means to have specific treatments at particular places.

Report cards compare hospitals; they force organizations to measure themselves against other similar organizations and against a gold standard (evidence-based medicine). Administrators should take the data from these report cards very seriously and use this information to communicate with the medical staff. Together, administrators and clinical staff can figure out why the data reflect what they do. If the data show that outcomes are poor for certain procedures, administrators should ask the clinicians to examine their processes and explain what led to the poor results. When the administrative approach is analytical rather than confrontational, processes can be examined and improved.

Individuals involved with public health and health care administration should examine the different kinds of report cards so that they can evaluate them. To properly evaluate report card information and use it effectively, administrators should know the source of the data, the reason for the collection, and the intended audience.

WORKING WITH ADMINISTRATIVE DATA

Health care administrative data are readily available, a matter of public record. Because these data are collected for financial reimbursement, they are descriptive, revealing what was done and to whom, but not how or why. It is precisely because these data are administrative that physicians tend to ignore the reports that are generated from them. They know that the data are not subtle enough to accurately represent clinical care.

Physicians often express reluctance to use administrative data to change practice because these data are derived from billing forms, which may not accurately reflect severity of illness or comorbidities and which are entered by clerical personnel, of varying skill, who lack the clinical knowledge to interpret the significance of various diagnoses. It is difficult, for example, to determine from the database whether a comorbidity (pneumonia or acute myocardial infarction, say) helped precipitate a stroke or was a complication of the stroke. The only outcome captured in administrative data is death or survival. However, even if the measure is invalid, a published report that your hospital has the highest mortality, infection, or complication rate in the state for a specific diagnosis or procedure may reveal a real problem with the delivery of care. Certainly, such a report will create a public relations issue.

Because administrative data are not clinically motivated, they can be insensitive to important aspects of hospital care. For example, if a small community hospital does not have the capability to perform complex cardiac procedures, such as cardiac catheterizations, patients who require those procedures are transferred to a hospital that is equipped to perform them. However, patients who are too ill to transfer or who are in the end-of-life stage of care remain at the small hospital. Therefore, when administrative data are collected, it appears as if the community hospital has a very high mortality rate for cardiac patients, with the implication that the hospital is providing very poor care. The actual situation cannot be captured by these data.

In this particular case those hospitals that were receiving very poor report cards due to this problem complained, and the model was changed to account for patient transfers. Unsurprisingly, when the model changed, the results changed. Unfortunately, once a hospital is labeled as “bad,” it is difficult for it to shed that label by arguing that the data are wrong. Even when a report is methodologically flawed, the public who read it do not realize that and react to it anyway.

Large purchasers of health insurance, such as General Electric and Ford Motor Company, have access to hospitals’ administrative data and hire analysts to develop models from these data to determine the expected mortality for particular diagnoses. The companies then can compare hospitals. The models contain such demographic information as age, gender, comorbid conditions, geographical region, diagnosis, and outcome (mortality). The analysis can point out that one hospital has a better performance than another and that the likelihood of dying at one is a certain percentage higher than it is at another. On the basis of these data, the companies can recommend hospitals to their employees. If an employee goes for a procedure and has complications, a long LOS, or an infection or if he or she requires extensive nursing home care or rehabilitation, it costs the purchaser more money than more effective and efficient care would. Therefore, even though these reports are based on administrative data, they have a financial impact on the industry.

WORKING WITH PRIMARY DATA

Primary data, unlike administrative data, are more clinically oriented. Primary data are recorded by physicians and nurses, not by financial coders. When primary data are coupled with evidence-based medicine, the resulting information can be used to examine the cause and effect relationship between treatment and outcome. This relationship is critical for administrators and business leaders to understand. From a business point of view, understanding the profitability of certain procedures and operations can lead to increasing the profit margin of the institution. In order to understand the impact that clinical care has on institutional operations, it is necessary to examine that care in the aggregate. Once care can be explained intelligently and objectively, leadership can take appropriate steps to create a suitable environment and to develop rational, data-driven approaches to care. Once leadership understands the care delivered at the institution, resources can be spent more appropriately.

If data are to be used to improve care, then these data should be primary, collected for the express purpose of understanding the delivery of care. When working with administrative data, you can only correlate variables from the database, but when working with primary data, you can formulate specific hypotheses and collect the data that confirm or refute them.

For example, the State of New York began collecting primary data with the objective of building a scientific report card that could be refined over time and respond to new information. By collecting primary data about specific diseases, the state forced clinicians to document specific data points in the medical record. From this information a model was developed by a task force of experts from around the state that attempted to explain why people died, not simply encode that they did die. Unlike the secondary data from the administrative databases, this model had some explanatory power.

Report cards, ideally, should be used to discover what can be done to improve care and to evaluate the impact of particular treatments on outcomes. Unlike administrative data, which are not suited to analyzing medical practice, the data from the Centers for Medicare and Medicaid Services (CMS) can pinpoint the population the CMS wishes to study and whose health the agency wants to improve. Analysts determine the specific population and set elaborate exclusion criteria so that the population can be reliably compared. For example, people who suffer heart attacks have all kinds of other issues. One instance of this is that research shows that people who are on dialysis and who require a coronary artery bypass graft (CABG) have a greater risk of dying than other CABG patients do. Therefore, in studies of cardiac mortality among CABG patients, recently dialyzed patients are excluded from the mortality data. This is one of the great advantages of using primary data—the measures can be revised and refined as information leads to increased knowledge.

The CMS uses primary data to assess and improve care; the core measures reflect the assumptions that were made. The assumptions are not plucked from thin air, of course, but based on the evidence of experts and on research.

CASE EXAMPLE: STROKE

Stroke is estimated to affect three-quarters of a million Americans annually and is the third leading cause of death in the United States. Research has not established definitive treatments for stroke, and therefore there is tremendous variability in how stroke patients are managed. Administrative data are used by external drivers of quality to rank hospitals according to outcomes for stroke. Hospitals with poor outcomes protest the shortcomings of the administrative databases and argue that even the risk-adjusted data cannot possibly account for the differences in patient populations among hospitals.

However, administrative data can provoke a useful discussion about care. At one of the hospitals in our health system, risk-adjusted stroke mortality was publicly reported to be higher than the rate found in the rest of the state over a four-year period. The appropriate question to ask when confronted with such data year after year is why? Without the aggregated external data, no single physician or administrator would even have known that mortality was high, never mind inquiring into the cause. When asked about the poor ratings, the physicians said that stroke patients are often elderly and very sick and that dying is a normal consequence of the illness. Perhaps it was possible that this hospital’s patient population was consistently sicker than the population at other hospitals, but even with that assumption care processes could be examined. If this assumption were correct, we should have been able to verify it easily by examining the medical records (primary data) of those elderly patients with stroke who died. That is exactly what we did.

A multidisciplinary committee analyzed administrative data from nine health system hospitals that admitted a significant number of stroke patients each year, and created a statistical model similar to the approach used in establishing hospital ratings. Our quality management department maintains a database for all system hospitals. For each patient discharged the data include the diagnosis (or more precisely, diagnosis related group, or DRG), ICD-9 codes for comorbidities and complications, various procedure codes, and demographic information. This database allows the physician to review his or her patient population at a glance.

The CMS and two other governmental departments, the National Center for Health Statistics (NCHS) and the Department of Health and Human Services, have set guidelines for classifying and coding health status according to diagnosis and procedure using the International Classification of Diseases, 9th Revision, often referred to as the ICD-9 codes. These guidelines for coding have been approved by the American Hospital Association, American Health Information Management Association, CMS, and NCHS. The goal of the coding guidelines is to maintain consistency and completeness in the medical record.

A regression procedure was used to derive a model to predict death based on premorbid factors. This model described a clinical phenomenon that can be used by clinicians to understand their patient population in terms of factors contributing to patient death. An explanation of death per patient is very different from an explanation of death per population. A single death may be interpreted as part of the accepted statistics for the patient’s disease, but data analysis of a patient population can provide some clues as to the causes of death and the relationships between complications and death.

Because the departments of neurology and quality management took seriously the administrative data that described stroke mortality as greater at our hospital than at other hospitals across the state, the multidisciplinary committee examined primary data from the medical record as well, in order to delve deeper in the effort to discover the reason for the mortality. The clinicians reviewed a sample of the charts, matching all stroke patients who died to a randomly selected stroke patient from the same hospital from the same year who survived.
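The matched chart review described above can be sketched in a few lines: each stroke death is paired with a randomly selected survivor from the same hospital and the same year. This is an illustrative implementation only; the record fields (`id`, `hospital`, `year`, `died`) are hypothetical stand-ins for whatever the actual database recorded.

```python
import random

def match_cases_to_controls(patients, seed=0):
    """Pair each stroke patient who died with one randomly chosen
    survivor from the same hospital and discharge year."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    pairs = []
    for case in (p for p in patients if p["died"]):
        pool = [p for p in patients
                if not p["died"]
                and p["hospital"] == case["hospital"]
                and p["year"] == case["year"]]
        if pool:  # skip cases with no eligible matched survivor
            pairs.append((case["id"], rng.choice(pool)["id"]))
    return pairs
```

The random selection of the surviving control guards against cherry-picking charts that would confirm a favored explanation.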

The first step was to confirm whether the administrative data were accurate and reliable. When the charts were reviewed by a neurologist to determine if the diagnosis of stroke was appropriate, coding problems were exposed. In one hospital the miscode rate was quite high, and also the mortality among miscoded patients was higher than that among the correctly coded patients, thereby artificially increasing overall mortality for the stroke group. This is an important lesson. Because CMS and insurance companies use such administrative databases to compare hospitals’ mortality rates, and for reimbursement, proper coding is important.

Analysis identified seven variables that significantly predicted mortality—age, atrial fibrillation, congestive heart failure, dementia, intracerebral hemorrhage, diabetes mellitus, and anemia—and a formula was developed to calculate the probability of death. The relative risk of death was calculated for each stroke subtype, and then a measure was derived that provided an overall estimate of stroke severity in each hospital in the study.
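To illustrate how such a formula works, the sketch below applies a standard logistic model over the seven predictors named above. The coefficients and intercept here are invented for the example; the study’s actual values were derived by regression and are not reproduced in the text.

```python
import math

# Hypothetical coefficients for the seven predictors identified in the
# analysis; the real model's values are not reproduced here.
COEFS = {
    "advanced_age": 1.2,
    "atrial_fibrillation": 0.8,
    "congestive_heart_failure": 0.7,
    "dementia": 0.6,
    "intracerebral_hemorrhage": 1.5,
    "diabetes_mellitus": 0.4,
    "anemia": 0.3,
}
INTERCEPT = -3.0  # hypothetical baseline log-odds of death

def probability_of_death(patient):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))),
    where each x_i is 1 if the risk factor is present, else 0."""
    z = INTERCEPT + sum(COEFS[k] * int(bool(patient.get(k, 0)))
                        for k in COEFS)
    return 1.0 / (1.0 + math.exp(-z))
```

The logistic form guarantees that the output is a probability between 0 and 1, and each positive coefficient raises the predicted risk when its factor is present.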

In examining the medical records of those stroke patients who died, the team noticed that a high percentage had the secondary diagnosis of aspiration pneumonia. Stroke patients typically have trouble eating and swallowing, and the result is that the lungs can be affected. One of the ways to avoid aspiration pneumonia in stroke patients is to give them speech and swallowing therapy. The charts of the patients who died did not indicate that they had that therapy. By looking at the primary data—the medical record—important hypotheses could be made about improving care. The administrative data defined the problem, and the primary data analyzed its cause. The information led to changed practices (increased speech and swallowing therapy) and the mortality rating improved.

After the chart reviews and analysis the data were reviewed with neurologists from all the system hospitals, who agreed to collaborate in an effort to improve outcomes for stroke patients. Mortality was higher in bedridden patients receiving heparin and lower among patients receiving physical therapy, who then had less risk of deep vein thromboses. Blood sugar and blood pressure also were found to be important elements in positive outcomes. Reviewing the data in the aggregate, the group was able to develop consensus regarding an improved and standardized stroke treatment protocol.

OPERATIONAL DECISIONS AND QUALITY DATA

When the data reveal good processes and outcomes, it’s good for business. The case mix index (CMI), a number reflecting the complexity of treatment given a patient, is based on several clinical variables. Because the health care institution gets paid more for a higher CMI, financial and administrative departments have become familiar with analyzing case mix. Administrators track these variables over time, identifying the ratio of surgical to medical patients (the institution gets more reimbursement for surgical procedures), just as they track census information, which provides data on how many patients are admitted to and discharged from the hospital. Census information and variables related to CMI have operational implications for administrators. For example, information about how many patients are in the hospital and for what kinds of medical treatment can influence decisions about staffing ratios, space allocation for different departments, and technology purchases. Therefore clinical and financial variables together affect administrative choices about organizational operations.
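The arithmetic behind two of the tracked quantities above is simple: the CMI is the average DRG relative weight across discharges, and the surgical-to-medical ratio is a count ratio. The sketch below assumes each discharge record carries a DRG weight and a surgical/medical flag; both field names are hypothetical.

```python
def case_mix_index(discharges):
    """CMI: the average of the DRG relative weights across discharges.
    A higher CMI reflects a more complex (and better-reimbursed) mix."""
    return sum(d["drg_weight"] for d in discharges) / len(discharges)

def surgical_to_medical_ratio(discharges):
    """Ratio of surgical to medical discharges, which administrators
    track because surgical cases are reimbursed at higher rates."""
    surgical = sum(1 for d in discharges if d["type"] == "surgical")
    medical = sum(1 for d in discharges if d["type"] == "medical")
    return surgical / medical
```

Tracking both numbers over time, alongside census counts, is what lets administrators connect the clinical mix to staffing, space, and technology decisions.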

The budget reflects the financial goals, priorities, and operations of the hospital. The CEO wants the budget to reflect the idea that the hospital is run efficiently and effectively and that patient outcomes are good. Unlike some other industries, there is a very low profit margin in health care. Hospitals can’t offer discounts. Regulations have to be followed, and the government controls how much money the institution gets paid. Therefore, expenses need to be monitored very closely and processes analyzed carefully. Administration needs appropriate weapons, in this case, the facts based on data, to explain to the governing body the workings of the hospital and how they have an impact on the profit margin.

The goal for administrators is to establish some criteria by which to judge whether certain investments are worthwhile and how different variables interact and are related to each other. Quality management can explain the process of care by identifying and reporting data on crucial variables that influence profit and loss. With quality data, administrators have tools at their disposal to balance clinical, operational, and budgetary issues.

Compliance with regulatory indicators is only one of the reasons to collect quality data. In addition to the numerical data and rankings released on public report cards, qualitative data are also reported in other forms. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO), for example, publishes a hospital scorecard that describes areas that require improvement. For example, the public may be informed through the JCAHO report that patient assessment at a hospital is poor or that a hospital is using inappropriate and dangerous abbreviations when ordering medication. These data are evaluative. The more areas that “need improvement,” the more vulnerable the hospital is to having accreditation problems, because this evaluation suggests poor processes and inadequate facilities, oversight, and operations.

QUALITY AND RISK

Quality management data can help administrators prioritize resource allocation through identifying risk factors in various processes. Quality management methodology analyzes processes proactively, using the failure mode and effects analysis (FMEA), and retrospectively, using root cause analysis (RCA), to find gaps in care that can cause adverse events that are costly from a patient care and organizational point of view. The FMEA, required by the JCAHO as a safety precaution, analyzes the process of care with the goal of identifying the likelihood of a particular process failure and attempts to locate the risk points in a process. Once gaps are found, the multidisciplinary team conducting the analysis estimates the relative harm of that potential error and determines a criticality index, which ranks the most severe consequences of a failure in the process. Together, information about the probability of failure and information about the consequences of failure can guide improvement efforts. The end product of the analysis is an action plan to address the potential problem.
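One common way to compute the criticality index described above is the risk priority number (RPN): severity times occurrence times detectability, each scored on a 1 to 10 scale where higher is worse. Scoring conventions vary by organization, so this sketch uses the usual three-factor form only as an illustration.

```python
def risk_priority(steps):
    """Rank process steps by RPN = severity * occurrence * detectability
    (each scored 1-10, higher = worse). Returns (step, RPN) pairs,
    highest risk first, so improvement effort goes to the top of the list."""
    return sorted(
        ((s["step"], s["severity"] * s["occurrence"] * s["detectability"])
         for s in steps),
        key=lambda pair: pair[1],
        reverse=True,
    )
```

Ranking the steps this way is what turns the FMEA from a catalog of everything that could go wrong into a prioritized action plan.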

The close relationship between risk management and quality management can lead to positive relationships between insurance companies and the hospital, because risk management uses quality variables to determine favorable rates in negotiating with insurance companies. Insurance companies are very interested in knowing, prior to giving coverage, how the hospital handles high-risk procedures. Also, having a good quality measure and a good method of root cause analysis can lead to pretrial negotiation that reduces high costs in malpractice cases. Once best practices have been established, staff can be educated about positive clinical outcomes and the associated reduction of malpractice cases. When quality measures are taken into consideration as part of risk-management decisions, the financial results are excellent.

For example, because medication errors are so commonplace, hospitals’ insurance rates can be extremely high. In our system we were able to show our insurers that we had a deliberate process to monitor potential gaps in the delivery of care. One of the improvements in the medication delivery process, for instance, was to have nurses perform a read-back of verbal orders, in order to minimize mishearings and misinterpretations. Data showed that nurses were complying with this safety improvement and that due to this preventive process, errors were reduced. Risk management confirmed the improvement efforts with data that showed that malpractice claims for these errors were reduced. Insurers were then confident that our improvement process was deliberate and that the reduced number of errors was not the result of chance but of careful oversight. Premiums were reduced.

Using measurements and quality management methodology creates efficiency and effectiveness. Proactive safety analysis can reduce malpractice claims and illustrate to insurance carriers that the organization has processes and checks and balances to keep patients safe from harm. This is what our system was able to do with the bariatric surgery protocols, and we gained favorable premiums. Insurers saw that our system anticipated problems and was providing solutions and that the organization was doing its utmost to avoid costly harm to patients.

When there are budget problems, finance often considers staff an expense. But when the case mix is high and specialized staff are required for care, a larger staff might improve patient safety by reducing infection, decubiti, falls, errors, and mortality. Staffing then is not an expense but a way to avoid the unnecessary expenditures that result from problems. The intensive care unit (ICU) may or may not need a 1:1 patient-to-staff ratio. With quality data available, the acuity of patients can be analyzed accurately and the need for staff assessed. Whereas finance may be able to report on staffing as an expense, quality management may be able to explain the relationship between the care provided by staff and the patient outcome.

When administrators make decisions about prioritizing expenses, it is important that they not cut crucial services that might protect patients from harm. It is the most seriously ill and therefore most vulnerable patients who require the most expensive equipment and highest staffing ratios. Complex patient problems require highly technical equipment and monitoring and the very good management that results in a well-coordinated clinical or nonclinical service. When patient safety is compromised, it is very bad for the organization. When patients die unnecessarily, it is expensive. Mortality reviews require resources, and the bad publicity that accompanies poor outcomes reduces patient volume. The administrator has to understand the concept of unnecessary death (death that might have been prevented by, for example, improving processes to reduce or prevent infection) and be aware of the variables that can be monitored to reduce or eliminate such events.

CASE EXAMPLE: FMEA AND BLOOD TRANSFUSIONS

Hospitals monitor blood transfusions carefully because they are high-risk, complex processes and the consequences of an error, such as delivering the wrong blood to the wrong patient, can be serious, even fatal. Because such great harm can occur, proactive analysis that can prevent an incident is certainly desirable. Blood transfusion errors occur because the process involves multiple steps and many people, departments, and activities. The FMEA identifies and examines every step in the process.

Think of what is involved. A physician determines that a patient requires a transfusion and writes the order. A nurse draws blood from the correct patient and labels the blood sample with the patient’s name. Although this is the first step in a complex process, even here errors can and do occur. In one reported instance a nurse drew the blood from the correct patient, but the vial for the sample did not have a label and she put the blood in her pocket for future labeling. Hospital policy dictated that blood must be labeled at the patient’s bedside, but this policy was ignored. The nurse got busy, drew blood from another patient, labeled the wrong blood with the wrong name, and an incident occurred. Such a simple thing—not having a label or a pen—can cause a serious problem.

Between the blood draw and the final blood administration there are multiple steps, with each step susceptible to error. Once the blood is drawn and correctly labeled, someone has to transport it to the lab, where the blood has to be accurately analyzed for type and other elements. Once the patient’s blood is properly identified, the matching blood for the transfusion is located at the lab and labeled for the appropriate patient. The policy is that someone from the patient’s floor is supposed to collect the blood and verify the accuracy of the blood type and the patient. Once the blood is on the floor, the policy is that two clinicians have to verify that indeed the correct blood type is being administered to the correct patient. This is a double verification process to ensure accuracy. However, there are instances when policy is not followed, and one nurse may say she “trusted” the other to do the proper verification. Processes and policies are often not followed to the letter, sometimes with tragic consequences.

Transfusion errors are so devastating and yet so common that JCAHO has communicated some suggestions to help hospitals reduce the risk of making such errors. These suggestions stress developing patient identification policies, perhaps with a unique identification band for transfusions, and double-checking blood verification procedures. They suggest discontinuing the common practice of processing multiple samples at one time, and redesigning the environment so that multiple samples are not stored in the same refrigerator.

Education is essential for policies and procedures to be internalized. When activities become routinized, people can easily shift to shortcuts. However, serious errors are often the consequence of trivial actions. Therefore, stressing conformity to the policies and procedures can save lives, protect patient safety, and improve organizational processes and resource management.

COMMUNICATING QUALITY DATA

Over a ten-year period our quality management department was able to educate key decision makers on the importance of various operational and clinical measures and to establish a matrix of reported data that translated bedside care into operations in a format that leaders could easily view and interpret.

All the measures that are reported out on this high-level quality report (see the sampling in Figure 7.1) have the same intention: to objectively monitor, assess, and improve care and the operation of delivering care. Further, the measures appear in sets that communicate complex information, increase awareness of problems and successes in the delivery of care, and promote accountability, because care is objectively and continuously monitored over time.

Figure 7.1. Four Examples of the Sets of Measures Reported in an Executive Summary, September 2005.

These measures are reported throughout the organization, to the medical boards, the performance improvement committees, the board of trustees, the CEO, and the chief operating officer. A graphic at the top of each set of measures, a small pie chart, conveniently encodes how the organization is doing on that cluster. The measures in Figure 7.1 reflect a great deal of high-level information about various aspects of care. At a glance the sections of the pie charts distinguish whether the reported indicators are better than, the same as, or worse than the established benchmark. Indicators for the high-risk and resource-intensive environments of the emergency department (ED) and ICU are compared across five hospitals. The data show that only one hospital (E) had a good (that is, low) rate of returns to the ED within seventy-two hours. Two hospitals (B, E) showed improved (lower) self-extubation rates. These executive reports enable an administrator to quickly grasp where there are successes and where improvement is needed.
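A minimal version of the better/same/worse coding behind each pie-chart segment might look like the following. The tolerance band and the default assumption that lower rates are better are illustrative choices for the sketch, not the report’s actual methodology.

```python
def classify_vs_benchmark(rate, benchmark, tolerance=0.05, lower_is_better=True):
    """Code one indicator as 'better', 'same', or 'worse' than its
    benchmark. Rates within `tolerance` (relative) count as 'same'."""
    if benchmark == 0:
        raise ValueError("benchmark must be nonzero")
    diff = (rate - benchmark) / benchmark
    if abs(diff) <= tolerance:
        return "same"
    if lower_is_better:
        return "better" if diff < 0 else "worse"
    return "better" if diff > 0 else "worse"
```

Collapsing each indicator to one of three categories is what lets a single small pie chart summarize a whole cluster of measures for a busy executive.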

Busy administrators want to get the picture quickly and learn how poor performance can be improved. From this display, leadership can see, for example, that for patient safety measures, the system is doing poorly, with half the pie chart coded white, indicating performance below the established benchmark. With this form of graphic display, a problem needing a solution can be quickly identified.

The goal of communicating data is always to improve, and these and similar measures provide a good starting point for a discussion of how to accomplish this. When multiple hospitals are compared, as we do monthly across our health system, best practices can be identified and shared with others. If one hospital’s data show that its pressure ulcer rate is better than the established benchmark (Hospital B or E), for example, it can share its experience with improving the delivery of care with others. Hospitals whose rate is poor, that is, below the benchmark, can ask questions and do a root cause analysis of their processes. These reports are continuous because once improvements are implemented, it is essential to maintain vigilance.

Excess days is a useful quality variable to track; it reflects both efficient and inefficient practices that have a direct impact on the financial health of the organization. Therefore quality management aggregates these data and reports excess days on the executive summary. To receive appropriate reimbursement, the organization tries to match the benchmark established by the CMS for specific procedures or diseases. When utilization is appropriate, a low rate of excess days reveals efficiency in the delivery of care. A high rate of excess days for certain procedures may signal a problem: inefficient processes, poor communication between different levels of staff or departments, or treatment that resulted in poor patient outcomes and a prolonged, costly LOS. Any noteworthy change in the data can target a problem area to be investigated, such as discharge planning, delay in treatment, or lack of communication.
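Arithmetically, excess days are the sum of each patient’s days beyond the benchmark LOS for that condition. A minimal sketch; the diagnoses and benchmark values below are invented for illustration (a real report would use the benchmarks published by the CMS):

```python
# Illustrative benchmark LOS values (made up, not actual CMS figures)
benchmark_los = {"heart_failure": 4.5, "pneumonia": 5.0, "hip_replacement": 3.9}

# Hypothetical discharge records: condition and actual length of stay in days
discharges = [
    {"diagnosis": "heart_failure", "los": 6},
    {"diagnosis": "pneumonia", "los": 4},
    {"diagnosis": "heart_failure", "los": 9},
]

def excess_days(discharges, benchmarks):
    """Sum days beyond benchmark; a stay at or under benchmark contributes zero."""
    total = 0.0
    for d in discharges:
        total += max(0.0, d["los"] - benchmarks[d["diagnosis"]])
    return total

print(excess_days(discharges, benchmark_los))  # 6.0
```

The second stay, which finished a day under benchmark, contributes nothing: excess days flag overruns, and efficient stays do not offset them.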

When patients must return to the ED within a certain amount of time or make an unplanned return to the OR, resources have to be used that may not be reimbursed, because duplicate procedures are often not insured. When patients in the ED leave without being evaluated (LWBE) or against medical advice (AMA), then the care may not have been efficient. When the surgical site infection (SSI) rate is high, then care processes should be examined. The goal is to balance efficient processes with successful outcomes, and the best way to monitor this balance is through data. The intent of the measures is to capture relevant information that can be clearly illustrated and communicated. Because the measures are reported for specific time periods and tracked over time, problems and improvements can be easily seen. Measures are the best way to objectively assess care or evaluate the performance of caregivers.

From a purely operational point of view, unplanned returns to the OR are a serious problem, one that needs correction. They create a backlog in the OR, which has an impact on efficiency, preventing or delaying normally scheduled procedures. Recovery rooms and other units become taxed as well. When a patient requires a reoperation, insurance companies may feel that there was something wrong with the initial procedure and see it as a quality issue. Clusters of unplanned reoperations, linked either by procedure or physician, should be of particular interest to administrators because they will cost the hospital money and they suggest the existence of a process or competency problem. Only quality management data can provide administrators and leadership with an appropriate and accurate level of oversight. Administrators need to learn to use quality data to understand operations and make decisions.

Consider autopsies. An autopsy provides an accurate diagnosis of the patient’s condition and therefore of the physician’s diagnostic accuracy and the appropriateness of treatment. Recently, the media reported that several transplant patients died because their organ donor had had rabies, a fact entirely unknown to the physicians. The donor’s symptoms were congruent with a drug overdose, and a diagnosis of rabies was never considered. After the patients who received the infected organs died, there was an investigation and autopsies were performed, which is how the rabies was discovered. Autopsies tell the truth. However, there is no reimbursement for the procedure, and organizations don’t encourage it. Physicians may be happier thinking they did it right and may not want clear proof that their diagnosis or treatment was incorrect. But by choosing not to know, the organization puts itself in the path of an oncoming error. If there are problems, they are better identified than ignored, but that is a tough position for an administrator to take. This is why regulatory agencies recommend that organizations improve their autopsy rates.

Most clinical measures have operational analogues. Administrators can’t rely solely on physicians to understand the clinical processes that affect operations. In our system the CEO clearly took the position that all care would be measured, reported, and communicated across the organization. His message was that he was not going to hide from being accountable if there were problems in services. Staff got the message. Measures, that is, the objective evaluation of care, would be used to assess competency. At every level of the organization, measures are used for evaluation and identification of problems, with the clear message that if you don’t know what’s broken, you can’t fix it.

The measures used were not pulled from thin air, nor were they imposed on the organization by quality management. The measures were developed over time, with a great deal of multidisciplinary input from various stakeholders who knew the process and the potential for problems. Stakeholders are in a position to understand how measures can affect their work.

For example, as a way to understand the efficiency and effectiveness of the ED, the measure LWBE (left without being evaluated) is collected. Clinicians need to know this measure because they are concerned about providing adequate patient care, and they don’t want poor processes. Staff have to be willing to use the measure to evaluate performance. Once that idea is accepted and socialized into the hospital culture, through consensus, the measure becomes an improvement tool, as well as one that promotes accountability. When benchmarks establish the goals for an organization, it is not easy to ignore what looks like poor outcomes. Consistency is also important, and tracking measures over time shows ups and downs that can be addressed. When care is analyzed openly, with the intent to understand and improve, rather than blame and shame, organizations prosper.

CASE EXAMPLE: DECUBITI

Tracking the incidence and severity of decubiti (skin pressure injuries) can function as a managerial tool, one that identifies a defect in the delivery of care. Therefore it is important for the medical board and for administrative and financial leadership to know the rate of pressure ulcers among their patient population and to ensure that treatment is standardized.

Patients with decubiti have a longer than anticipated LOS, and the costs associated with treatment and complications—pain, loss of limbs, infection, and even death—are high. Services related to treating pressure injuries may or may not be reimbursed by insurance. So decubiti should be monitored in the interests of the organization’s clinical, operational, and financial success. The decubiti rate can shed light on expenses related to such things as specialty beds, pharmacy (in the form of medicinal products), nursing performance and competence, and staffing ratios. Other operational and clinical issues involve the continuum of care, discharge disposition of patients, communication among staff, and patient outcomes such as sepsis and death.

In our multihospital system, data revealed that the rates of decubiti being reported varied across hospitals and fluctuated dramatically from month to month. The lack of consistency made comparative analysis difficult. One of the issues that the quality management department addressed was the disparity in care at different levels across the continuum. Care practices varied, for example, among the acute care phase of hospitalization, long-term care, and home care. Some patients came to the hospital from nursing homes with preexisting pressure injuries. The challenge was to implement a process that would standardize care wherever the patient interacted with the system, whether on a surgical floor or in a rehabilitation center.

Leadership supported a performance improvement initiative to improve and standardize care throughout the multiple facilities of the health care system. Using the PDCA methodology the quality management department engaged staff in sharing accountability for reducing the decubiti rate by focusing on data collection; the development of clinical guidelines for best practices; educational efforts for nurses, physical therapists, physicians, and nutritionists; and improved communication so that care of patients with decubiti would be standardized.

Quality management staff worked with nursing to develop a single definition that could be used across the system and to establish a method for data collection that would be consistent and valid. This was no easy task and took a year to formulate. There were many issues to be addressed. For example, in assessing the severity of a skin injury, one of the symptoms a nurse evaluates is redness. But how red does the skin have to be to be “red”? Also, in coding the number of injuries, if one patient has three separate skin ulcers, should that be counted as one or three? Until the staff met and started discussing the definition of the measure, these issues had gone largely unaddressed.

To eliminate idiosyncratic judgment, our system adopted an objective scale that gives clear guidelines on how to evaluate the severity of the injury. The scale was chosen for uniformity and completeness of risk assessment, and caregivers were trained in how to use it. It ranks and scores relevant factors that have an impact on pressure injuries, such as the patient’s mobility status (from completely limited to no impairment), nutritional status (from very poor to excellent), activity level (from bedfast to frequent walking), and so on. Reddened areas or skin breakdowns are also objectively assessed to determine whether the patient is not at risk or should be placed on the pressure ulcer protocol. Assessment is done daily and as needed to maintain optimal vigilance. However, simply adopting a scale does not immediately ensure that it will be used properly, and educational programs were necessary to train the nurses appropriately.
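A scale of this kind can be sketched as a sum of subscale scores compared to an at-risk cutoff. The subscale names, score ranges, and cutoff below are illustrative assumptions, loosely modeled on published instruments such as the Braden scale; the chapter does not specify which instrument the system actually adopted:

```python
# Illustrative pressure-injury risk scoring; subscales and cutoff are assumptions.
SUBSCALES = ("mobility", "nutrition", "activity", "moisture", "sensory")

def risk_score(assessment):
    """Each subscale scored 1 (worst) to 4 (best); lower totals mean higher risk."""
    return sum(assessment[s] for s in SUBSCALES)

def on_protocol(assessment, cutoff=14):
    """Place the patient on the pressure ulcer protocol at or below the cutoff."""
    return risk_score(assessment) <= cutoff

# Hypothetical daily assessment for one patient
patient = {"mobility": 1, "nutrition": 2, "activity": 1, "moisture": 3, "sensory": 2}
print(risk_score(patient), on_protocol(patient))  # 9 True
```

The point of the structure, as in the scale the system adopted, is that two nurses scoring the same patient arrive at the same total and the same protocol decision.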

A systemwide performance improvement committee was formed, spearheaded by a collaboration between the quality management and the materials management departments. The committee established guidelines for the prediction of, prevention of, and treatment for pressure injuries. Clinical pathways, called CareMaps, were revised to incorporate skin care protocols. For example, orthopedic patients who may be immobilized are at special risk for pressure injuries; therefore, in the CareMap for total hip replacement, special skin care interventions are listed, including consultations with a nutritionist and physical therapist.

The effort, expense, and time involved in developing appropriate measures were well worth it. Once a single, consistent measure was established, nursing leadership could compare hospitals, and progress could be assessed over time. Education on the decubiti measure helped to focus staff attention on a serious condition that had become somewhat peripheral to treatment. The documentation requirements also served to heighten awareness and improve assessment and treatment. When the system reported data that showed the decubiti rate was reduced, both in volume and severity, the public relations department was able to assure the public that our organization had a decubiti rate well below the national benchmark (see Figure 7.2).

There were other operational and financial benefits to improving care: specialty beds were used more efficiently and medication was streamlined and could be purchased less expensively. The committee discovered a great deal of variability in the skin care products used. Over 160 different products were in use in multiple facilities. Working with materials management staff, the committee streamlined the products to twenty-four, which helped to control costs. A set of performance measures was standardized across the system. These measures recorded whether a risk assessment was documented within twenty-four hours of admission and also recorded the severity and source of injury, topical treatments, and so on. Quality management established databases for reporting the measures, which improved accountability and communication and helped to identify areas of excellence and benchmarks for best practices.
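The twenty-four-hour documentation measure reduces to a timestamp comparison across admission records. A hedged sketch; the record layout and dates are invented for illustration:

```python
from datetime import datetime, timedelta

def assessed_within_24h(admitted, assessed):
    """True when a documented risk assessment falls within 24 hours of admission."""
    return assessed is not None and (assessed - admitted) <= timedelta(hours=24)

# Hypothetical (admission, assessment) timestamp pairs; None = not documented
records = [
    (datetime(2004, 5, 1, 8, 0), datetime(2004, 5, 1, 20, 0)),
    (datetime(2004, 5, 2, 8, 0), datetime(2004, 5, 4, 9, 0)),
    (datetime(2004, 5, 3, 8, 0), None),
]

compliant = sum(assessed_within_24h(a, s) for a, s in records)
rate = compliant / len(records)
print(f"{rate:.0%}")  # 33%
```

Note that a missing assessment counts as noncompliant rather than being dropped from the denominator; how the denominator is defined is exactly the kind of question the measure-definition work described above had to settle.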

Figure 7.2. Pressure Injuries, 1997–2004.

Quality management defined the methodology that resulted in the decubiti measure; reporting rates and comparing hospitals to a consistent standard helped to motivate action. Hospitals with good rates served as best practice models for those with poorer rates. If the rate of decubiti could be improved at one institution, the process could be duplicated at others.

The initiative successfully reduced the incidence and severity of pressure injuries across the system. LOS was also reduced. Due to the collaboration between quality management and materials management, there were purchasing benefits to the system, with a 24 percent cost reduction in specialty beds and skin care products. Products are now purchased based on quality standards rather than cost effectiveness alone. By standardizing the approach to assessment and by using clinical guidelines, standard definitions, and uniform tools, treatment methodologies, and products, the health system has reduced skin injuries and the expenses associated with them.

MEASURES TELL THE TRUTH

The dilemma facing every administrator is how much he or she wants to reveal about the truth and how much he or she wants to project a positive face to the community. These forces may be at odds. Quality management methodology leads to the truth; it is not a way to gloss over problems but rather to identify and correct them. It is precisely this issue of accountability that makes the use of measures so complex. It is hard to argue with the data, even though, when results show poor performance, people most often react by criticizing the measures rather than their own performance. The ability to accept objective information, especially when it targets areas that require improvement, is a matter of culture. But it is most important not to hide from the facts, even if they are unpleasant. Administrative leaders need to understand the problems in the care they deliver and make appropriate interventions to correct them. It is easy to look at a poor report card and complain that the measures used don’t accurately reflect the delivery of care. However, if measures are valued as representations of the various aspects of care, they can be used for improvement.

Clinical staff and institutional administrators are frequently defensive when they see poor results on the table of measures. Interestingly, this is not the case for financial measures that reveal problem areas. For some reason, people can accept financial failures, even to the point of declaring bankruptcy, without becoming defensive. But the same people who can say that they are in a financially disastrous state won’t say that they have a clinical disaster on their hands. The health care community is sympathetic to financial problems but less so to clinical ones. Perhaps the reason that clinicians have trouble accepting poor performance revealed through measures is that their highest value is to “do no harm.” If the data show harm, this is very troubling.

Administrators also have to juggle their responsibilities between assuring the public that the hospital environment is safe for them and admitting to real problems that may result in a crisis. Generally, bad processes and poor outcomes eventually come to the public’s attention, and if there is any suggestion that there has been a cover-up about known problems, that doesn’t do the institution, or its administration, any favors.

SUMMARY

Quality data should be integrated into operational and financial decisions because these data

  • Provide information for long-term strategic planning.
  • Reveal information relevant to daily operations.
  • Are required by regulatory agencies for accreditation, compliance, and reimbursement.
  • Help the organization balance clinical, financial, and operational information.
  • Are publicized on the Web and reported through the media.
  • Can be used to evaluate and compare hospitals.
  • Define “good” care as compliance with evidence-based indicators.
  • Help administrators understand reimbursement.
  • Help administrators prioritize resource expenditure.
  • Relate seemingly unrelated variables and patient outcomes so administrators can better understand operations and expenses.
  • Communicate complex information to various interest groups.
  • Reflect institutional leadership’s values, goals, and philosophy.
  • Help staff evaluate their performance, promoting accountability.
  • Translate individuals’ experiences into an aggregated and collective representation of the delivery of care.
  • Draw on different sources, administrative and primary, to reflect important information about both patients and services.

Things to Think About

Your ED is typically overcrowded and busy. Data reveal that although the CMS requires that pneumonia patients receive an antibiotic within four hours of coming to the ED, these patients are receiving antibiotics between six and eight hours after arrival. As the administrator, what can you do?

  • How would you analyze the problem? What variables would you examine?
  • Which members of the professional staff would you call on to help you analyze and improve the situation? Why those staff and not others?
  • Who would be accountable for improvements?
  • How would improvements be measured?
  • How would the data be reported? Why in one format rather than another?