Measuring healthcare quality (2024)

3.1. Introduction

The field of quality measurement in healthcare has developed considerably in the past few decades and has attracted growing interest among researchers, policy-makers and the general public (Papanicolas & Smith, 2013; EC, 2016; OECD, 2019). Researchers and policy-makers are increasingly seeking to develop more systematic ways of measuring and benchmarking quality of care of different providers. Quality of care is now systematically reported as part of overall health system performance reports in many countries, including Australia, Belgium, Canada, Italy, Mexico, Spain, the Netherlands, and most Nordic countries. At the same time, international efforts in comparing and benchmarking quality of care across countries are mounting. The Organisation for Economic Co-operation and Development (OECD) and the EU Commission have both expanded their efforts at assessing and comparing healthcare quality internationally (Carinci et al., 2015; EC, 2016). Furthermore, a growing focus on value-based healthcare (Porter, 2010) has sparked renewed interest in the standardization of measurement of outcomes (ICHOM, 2019), and notably the measurement of patient-reported outcomes has gained momentum (OECD, 2019).

The increasing interest in quality measurement has been accompanied and supported by a growing ability to measure and analyse quality of care, driven, among other things, by significant changes in information technology and associated advances in measurement methodology. National policy-makers recognize that without measurement it is difficult to assure high quality of service provision in a country, as it is impossible to identify good and bad providers or practitioners without reliable information about quality of care. Measuring quality of care is important for a range of different stakeholders within healthcare systems, and it forms the basis for numerous quality assurance and improvement strategies discussed in Part II of this book. In particular, accreditation and certification (see Chapter 8), audit and feedback (see Chapter 10), public reporting (see Chapter 13) and pay for quality (see Chapter 14) rely heavily on the availability of reliable information about the quality of care provided by different providers and/or professionals. Common to all strategies in Part II is that without robust measurement of quality, it is impossible to determine the extent to which new regulations or quality improvement interventions actually work and improve quality as expected, or whether they also have adverse effects.

This chapter presents different approaches, frameworks and data sources used in quality measurement as well as methodological challenges, such as risk-adjustment, that need to be considered when making inferences about quality measures. In line with the focus of this book (see Chapter 1), the chapter focuses on measuring quality of healthcare services, i.e. on the quality dimensions of effectiveness, patient safety and patient-centredness. Other dimensions of health system performance, such as accessibility and efficiency, are not covered in this chapter as they are the focus of other volumes about health system performance assessment (see, for example, Smith et al., 2009; Papanicolas & Smith, 2013; Cylus, Papanicolas & Smith, 2016). The chapter also provides examples of quality measurement systems in place in different countries. An overview of the history of quality measurement (with a focus on the United States) is given in Marjoua & Bozic (2012). Overviews of measurement challenges related to international comparisons are provided by Forde, Morgan & Klazinga (2013) and Papanicolas & Smith (2013).

3.2. How can quality be measured? From a concept of quality to quality indicators

Most quality measurement initiatives are concerned with the development and assessment of quality indicators (Lawrence & Olesen, 1997; Mainz, 2003; EC, 2016). Therefore, it is useful to step back and reflect on the idea of an indicator more generally. In the social sciences, an indicator is defined as “a quantitative measure that provides information about a variable that is difficult to measure directly” (Calhoun, 2002). Obviously, quality of care is difficult to measure directly because it is a theoretical concept that can encompass different aspects depending on the exact definition and the context of measurement.

Chapter 1 has defined quality of care as “the degree to which health services for individuals and populations are effective, safe and people-centred”. However, the chapter also highlighted that there is considerable confusion about the concept of quality because different institutions and people often mean different things when using it. To a certain degree, this is inevitable and even desirable because quality of care does mean different things in different contexts. However, this context dependency also makes clarity about the exact conceptualization of quality in a particular setting particularly important, before measurement can be initiated.

In line with the definition of quality in this book, quality indicators are defined as quantitative measures that provide information about the effectiveness, safety and/or people-centredness of care. Of course, numerous other definitions of quality indicators are possible (Mainz, 2003; Lawrence & Olesen, 1997). In addition, some institutions, such as the National Quality Forum (NQF) in the USA, use the term quality measure instead of quality indicator. Other institutions, such as the NHS Indicator Methodology and Assurance Service and the German Institute for Quality Assurance and Transparency in Health Care (IQTIG), define further attributes of quality indicators (IQTIG, 2018; NHS Digital, 2019a). According to these definitions, quality indicators should provide:

  1. a quality goal, i.e. a clear statement about the intended goal or objective, for example, inpatient mortality of patients admitted with pneumonia should be as low as possible;

  2. a measurement concept, i.e. a specified method for data collection and calculation of the indicator, for example, the proportion of inpatients with a primary diagnosis of pneumonia who died during the inpatient stay; and

  3. an appraisal concept, i.e. a description of how a measure is expected to be used to judge quality, for example, if inpatient mortality is below 10%, this is considered to be good quality.

Often the terms measures and indicators are used interchangeably. However, it makes sense to reserve the term quality indicator for measures that are accompanied by an appraisal concept (IQTIG, 2018), because measures without an appraisal concept are unable to indicate whether measured values represent good or bad quality of care. For example, the readmission rate is a measure of the number of readmissions. It becomes a quality indicator only once a threshold is defined that indicates "higher than normal" readmissions, which could, in turn, indicate poor quality of care. Another term that is frequently used interchangeably with quality indicator, in particular in the USA, is quality metric. However, a quality metric also does not necessarily define an appraisal concept, which would otherwise distinguish it from a simple measure. At the same time, the term quality metric is sometimes used more broadly for an entire system that aims to evaluate quality of care using a range of indicators.
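The three attributes of a quality indicator can be illustrated with a short, purely hypothetical sketch. The 10% threshold and the patient counts below are invented for illustration, not drawn from any real appraisal concept:

```python
# Hypothetical sketch: a measure becomes a quality indicator once an
# appraisal concept (here, a simple threshold) is attached to it.

def pneumonia_mortality_rate(deaths: int, admissions: int) -> float:
    """Measurement concept: proportion of pneumonia inpatients who died."""
    return deaths / admissions

def appraise(rate: float, threshold: float = 0.10) -> str:
    """Appraisal concept: rates below the (illustrative) 10% threshold are
    judged acceptable; the threshold itself is an invented assumption."""
    return "acceptable" if rate < threshold else "flag for review"

rate = pneumonia_mortality_rate(deaths=8, admissions=100)
print(f"{rate:.0%} -> {appraise(rate)}")  # 8% -> acceptable
```

Without the `appraise` step, the rate is merely a measure; the threshold is what turns it into an indicator of good or poor quality.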

Operationalizing the theoretical concept of quality by translating it into a set of quality indicators requires a clear understanding of the purpose and context of measurement. Chapter 2 has introduced a five-lens framework for describing and classifying quality strategies. Several of these lenses are also useful for better understanding the different aspects and contexts that need to be taken into account when measuring healthcare quality. First, it is clear that different indicators are needed to assess the three dimensions of quality, i.e. effectiveness, safety and/or patient-centredness, because they relate to very different concepts, such as patient health, medical errors and patient satisfaction.

Secondly, quality measurement has to differ depending on the function of the healthcare system concerned, i.e. depending on whether one is aiming to measure quality in preventive, acute, chronic or palliative care. For example, changes in health outcomes due to preventive care will often be measurable only after a long time has elapsed, while they will be visible more quickly in the area of acute care. Thirdly, quality measurement will vary depending on the target of the quality measurement initiative, i.e. payers, provider organizations, professionals, technologies and/or patients. For example, in some contexts it might be useful to assess the quality of care received by all patients covered by different payer organizations (for example, different health insurers or regions), but more frequently quality measurement will focus on care provided by different provider organizations. In international comparisons, entire countries constitute yet another level or target of measurement.

In addition, operationalizing quality for measurement will always require a focus on a limited set of quality aspects for a particular group of patients. For example, quality measurement may focus on patients with hip fracture treated in hospitals and define aspects of care that are related to effectiveness (for example, surgery performed within 24 hours of admission), safety (for example, anticoagulation to prevent thromboembolism), and/or patient-centredness of care (for example, patient was offered choice of spinal or general anaesthesia) (Voeten et al., 2018). However, again, the choice of indicators – and also potentially of different appraisal concepts for indicators used for the same quality aspects – will depend on the exact purpose of measurement.

3.3. Different purposes of quality measurement and users of quality information

It is useful to distinguish between two main purposes of quality measurement: the first is to use quality measurement in quality assurance systems as a summative mechanism for external accountability and verification; the second is to use it as a formative mechanism for quality improvement. Depending on the purpose, quality measurement systems face different challenges with regard to indicators, data sources and the level of precision required.

Table 3.1 highlights the differences between quality assurance and quality improvement (Freeman, 2002; Gardner, Olney & Dickinson, 2018). Measurement for quality assurance and accountability is focused on identifying and overcoming problems with quality of care and assuring a sufficient level of quality across providers. Quality assurance is the focus of many external assessment strategies (see also Chapter 8), and providers of insufficient quality may ultimately lose their licence and be prohibited from providing care. Assuring accountability is one of the main purposes of public reporting initiatives (see Chapter 13), and measured quality of care may contribute to trust in healthcare services and allow patients to choose higher-quality providers.

Table 3.1

The purpose of quality measurement: quality assurance versus quality improvement.

Quality measurement for quality assurance and accountability makes summative judgements about the quality of care provided. The idea is that “real” differences will be detected as a result of the measurement initiative. Therefore, a high level of precision is necessary and advanced statistical techniques may need to be employed to make sure that detected differences between providers are “real” and attributable to provider performance. Otherwise, measurement will encounter significant justified resistance from providers because its potential consequences, such as losing the licence or losing patients to other providers, would be unfair. Appraisal concepts of indicators for quality assurance will usually focus on assuring a minimum quality of care and identifying poor-quality providers. However, if the purpose is to incentivize high quality of care through pay for quality initiatives, the appraisal concept will likely focus on identifying providers delivering excellent quality of care.

By contrast, measurement for quality improvement is change oriented and quality information is used at the local level to promote continuous efforts of providers to improve their performance. Indicators have to be actionable and hence are often more process oriented. When used for quality improvement, quality measurement does not need to be perfect because it serves only to inform local improvement efforts. Other sources of data and local information are considered as well in order to provide context for measured quality of care. The results of quality measurement are used only to start discussions about quality differences and to motivate change in provider behaviour, for example, in audit and feedback initiatives (see Chapter 10). Freeman (2002) sums up the described differences between quality improvement and quality assurance as follows: "Quality improvement models use indicators to develop discussion further, assurance models use them to foreclose it."

Different stakeholders in healthcare systems pursue different objectives and as a result they have different information needs (Smith et al., 2009; EC, 2016). For example, governments and regulators are usually focused on quality assurance and accountability. They use related information mostly to assure that the quality of care provided to patients is of a sufficient level to avoid harm – although they are clearly also interested in assuring a certain level of effectiveness. By contrast, providers and professionals are more interested in using quality information to enable quality improvement by identifying areas where they deviate from scientific standards or benchmarks, which point to possibilities for improvement (see Chapter 10). Finally, patients and citizens may demand quality information in order to be assured that adequate health services will be available in case of need and to be able to choose providers of good-quality care (see Chapter 13). The stakeholders and their purposes of quality measurement have, of course, an important influence on the selection of indicators and data needs (see below).

While the distinction between quality assurance and quality improvement is useful, the difference is not always clear-cut. First, from a societal perspective, quality assurance aims at stamping out poor-quality care and thus contributes to improving average quality of care. Secondly, proponents of several of the strategies that are included under quality assurance in Table 3.1, such as external assessment (see also Chapter 8) or public reporting (see also Chapter 13), in fact claim that these strategies do contribute to improving quality of care and assuring public trust in healthcare services. In fact, as pointed out in the relevant chapters, the rationale of external assessment and public reporting is that these strategies will lead to changes within organizations that will ultimately contribute to improving quality of care. Clearly, there also need to be incentives and/or motivations for change, i.e. while internal quality improvement processes often rely on professionalism, external accountability mechanisms seek to motivate through external incentives and disincentives – but this is beyond the scope of this chapter.

3.4. Types of quality indicators

There are many options for classifying different types of quality indicators (Mainz, 2003). One option is to distinguish between rate-based indicators and simple count-based indicators, usually used for rare “sentinel” events. Rate-based indicators are the more common form of indicators. They are expressed as proportions or rates with clearly defined numerators and denominators, for example, the proportion of hip fracture patients who receive antibiotic prophylaxis before surgery. Count-based indicators are often used for operationalizing the safety dimension of quality and they identify individual events that are intrinsically undesirable. Examples include “never events”, such as a foreign body left in during surgery or surgery on the wrong side of the body. If the measurement purpose is quality improvement, each individual event would trigger further analysis and investigation to avoid similar problems in the future.
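The distinction between rate-based and count-based indicators can be sketched in code. The record format and field names below are invented purely for illustration:

```python
# Illustrative sketch (invented record format): a rate-based indicator needs
# a numerator and a denominator, while a count-based "sentinel" indicator
# flags every individual occurrence of an intrinsically undesirable event.

records = [
    {"patient": "A", "antibiotic_prophylaxis": True,  "never_event": None},
    {"patient": "B", "antibiotic_prophylaxis": False, "never_event": None},
    {"patient": "C", "antibiotic_prophylaxis": True,  "never_event": "retained foreign body"},
]

# Rate-based: proportion of hip fracture patients receiving prophylaxis.
numerator = sum(r["antibiotic_prophylaxis"] for r in records)
rate = numerator / len(records)

# Count-based: each sentinel event is listed and would, if the purpose is
# quality improvement, trigger its own investigation.
sentinel_events = [(r["patient"], r["never_event"]) for r in records if r["never_event"]]
```

Note that the rate only becomes interpretable against a denominator, whereas a single sentinel event is meaningful on its own.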

Another option is to distinguish between generic and disease-specific indicators. Generic indicators measure aspects of care that are relevant to all patients. One example of a generic indicator is the proportion of patients who waited more than six hours in the emergency department. Disease-specific indicators are relevant only for patients with a particular diagnosis, such as the proportion of patients with lung cancer who are alive 30 days after surgery.

Yet other options relate to the different lenses of the framework presented in Chapter 2. Indicators can be classified depending on the dimension of quality that they assess, i.e. effectiveness, patient safety and/or patient-centredness (the first lens); and with regard to the assessed function of healthcare, i.e. prevention, acute, chronic and/or palliative care (the second lens). Furthermore, it is possible to distinguish between patient-based indicators and event-based indicators. Patient-based indicators are indicators that are developed based on data that are linked across settings, allowing the identification of the pathway of care provided to individual patients. Event-based indicators are related to a specific event, for example, a hospital admission.

However, the most frequently used framework for distinguishing between different types of quality indicators is Donabedian’s classification of structure, process and outcome indicators (Donabedian, 1980). Donabedian’s triad builds the fourth lens of the framework presented in Chapter 2. The idea is that the structures where health care is provided have an effect on the processes of care, which in turn will influence patient health outcomes. Table 3.2 provides some examples of structure, process and outcome indicators related to the different dimensions of quality.

Table 3.2

Examples of structure, process and outcome quality indicators for different dimensions of quality.

In general, structural quality indicators are used to assess the setting of care, such as the adequacy of facilities and equipment, staffing ratios, qualifications of medical staff and administrative structures. Structural indicators related to effectiveness include the availability of staff with an appropriate skill mix, while the availability of safe medicines and the volume of surgeries performed are considered to be more related to patient safety. Structural indicators for patient-centredness can include the organizational implementation of a patients’ rights charter or the availability of patient information. Although institutional structures are certainly important for providing high-quality care, it is often difficult to establish a clear link between structures and clinical processes or outcomes, which reduces, to a certain extent, the relevance of structural measures.

Process indicators are used to assess whether actions indicating high-quality care are undertaken during service provision. Ideally, process indicators are built on reliable scientific evidence that compliance with these indicators is related to better outcomes of care. Sometimes process indicators are developed on the basis of clinical guidelines (see also Chapter 9) or some other gold standard. For example, a process indicator of effective care for AMI patients may assess if patients are given aspirin on arrival. A process indicator of safety in surgery may assess if a safety checklist is used during surgery, and process indicators for patient-centredness may analyse patient-reported experience measures (PREMs). Process measures account for the majority of indicators in most quality measurement frameworks (Cheng et al., 2014; Fujita, Moles & Chen, 2018; NQF, 2019a).

Finally, outcome indicators provide information about whether healthcare services help people stay alive and healthy. Outcome indicators are usually concrete and highly relevant to patients. For example, outcome indicators of effective ambulatory care include hospitalization rates for preventable conditions. Indicators of effective inpatient care for patients with acute myocardial infarction often include mortality rates within 30 days after admission, preferably calculated as a patient-based indicator (i.e. capturing deaths in any setting, including outside the hospital) and not as an event-based indicator (i.e. capturing death only within the hospital). Outcome indicators of patient safety may include complications of treatment, such as hospital-acquired infections or foreign bodies left in during surgery. Outcome indicators of patient-centredness may assess patient satisfaction or patients' willingness to recommend the hospital. Outcome indicators are increasingly used in quality measurement programmes, in particular in the USA, because they are of greater interest to patients and payers (Baker & Chassin, 2017).

3.5. Advantages and disadvantages of different types of indicators

Different types of indicators have their various strengths and weaknesses:

  • Generic indicators have the advantage that they assess aspects of healthcare quality that are relevant to all patients. Therefore, generic indicators are potentially meaningful for a greater audience of patients, payers and policy-makers.

  • Disease-specific indicators are better able to capture different aspects of healthcare quality that are relevant for improving patient care. In fact, most aspects of healthcare quality are disease-specific because effectiveness, safety and patient-centredness mean different things for different groups of diseases. For example, prescribing aspirin at discharge is an indicator of providing effective care for patients after acute myocardial infarction. However, if older patients are prescribed aspirin for extended periods of time without receiving gastro-protective medicines, this is an indicator of safety problems in primary care (NHS BSA, 2019).

Likewise, structure, process and outcome indicators each have their comparative strengths and weaknesses. These are summarized in Table 3.3. The strength of structural measures is that they are easily available, reportable and verifiable because structures are stable and easy to observe. However, the main weakness is that the link between structures and clinical processes or outcomes is often indirect and dependent on the actions of healthcare providers.

Table 3.3

Strengths and weaknesses of different types of indicators.

Process indicators are also measured relatively easily, and interpretation is often straightforward because there is often no need for risk-adjustment. In addition, poor performance on process indicators can be directly attributed to the actions of providers, thus giving clear indication for improvement, for example, by better adherence to clinical guidelines (Rubin, Pronovost & Diette, 2001). However, healthcare is complex and process indicators usually focus only on very specific procedures for a specific group of patients. Therefore, hundreds of indicators are needed to enable a comprehensive analysis of the quality of care provided by a professional or an institution. Relying only on a small set of process indicators carries the risk of distorting service provision towards a focus on measured areas of care while disregarding other (potentially more) important tasks that are harder to monitor.

Outcome indicators place the focus of quality assessments on the actual goals of service provision. Outcome indicators are often more meaningful to patients and policy-makers. The use of outcome indicators may also encourage innovations in service provision if these lead to better outcomes than following established processes of care. However, attributing health outcomes to the services provided by individual organizations or professionals is often difficult because outcomes are influenced by many factors outside the control of a provider (Lilford et al., 2004). In addition, outcomes may require a long time before they manifest themselves, which makes outcome measures more difficult to use for quality measurement (Donabedian, 1980). Furthermore, poor performance on outcome indicators does not necessarily provide direct indication for action as the outcomes may be related to a range of actions of different individuals who worked in a particular setting at a prior point in time.

3.6. Aggregating information in composite indicators

Given the complexity of healthcare provision and the wide range of relevant quality aspects, many quality measurement systems produce a large number of quality indicators. However, the availability of numerous different indicators may make it difficult for patients to select the best providers for their needs and for policy-makers to know whether overall quality of healthcare provision is improving. In addition, purchasers may struggle with identifying good-quality providers if they do not have a metric for aggregating conflicting results from different indicators. As a result, some users of quality information might base their decisions on only a few selected indicators that they understand, although these may not be the most important ones, and the information provided by many other relevant indicators will be lost (Goddard & Jacobs, 2009).

In response to these problems, many quality measurement initiatives have developed methods for combining different indicators into composite indicators or composite scores (Shwartz, Restuccia & Rosen, 2015). The use of composite indicators allows the aggregation of different aspects of quality into one measure to give a clearer picture of the overall quality of healthcare providers. The advantage is that the indicator summarizes information from a potentially wide range of individual indicators, thus providing a comprehensive assessment of quality. Composite indicators can serve many purposes: patients can select providers based on composite scores; hospital managers can use composite indicators to benchmark their hospitals against others; policy-makers can use composite indicators to assess progress over time; and researchers can use composite indicators for further analyses, for example, to identify factors associated with good quality of care. Table 3.4 summarizes some of the advantages and disadvantages of composite indicators.

The main disadvantages of composite indicators are that there are several (equally valid) options for aggregating individual indicators into a composite and that the methodological choices made during indicator construction will influence the measured performance. In addition, composite indicators may lead to simplistic conclusions and disguise serious failings in some dimensions. Furthermore, because methodological choices influence results, the selection of constituent indicators and weights can become the subject of political dispute. Finally, composite indicators do not allow the identification of specific problem areas and thus need to be used in conjunction with individual quality indicators in order to enable quality improvement.

There are at least three important methodological choices that have to be made to construct a composite indicator. First, individual indicators have to be chosen to be combined in the composite indicator. Of course, the selection of indicators and the quality of chosen indicators will be decisive for the reliability of the overall composite indicator. Secondly, individual indicators have to be transformed into a common scale to enable aggregation. There are many methods available for this rescaling of the results, including ranking, normalizing (for example, using z-scores), calculating the proportion of the range of scores, and grouping scores into categories (for example, 5 stars) (Shwartz, Restuccia & Rosen, 2015). All of these methods have their comparative advantages and disadvantages and there is no consensus about which one should be used for the construction of composite indicators.
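The second step, rescaling to a common scale, can be sketched for three of the methods mentioned above. The provider scores are invented for illustration, and the simple rank function below ignores ties:

```python
# Sketch of three rescaling methods for composite construction: ranking,
# z-score normalization and proportion of the range of scores.
from statistics import mean, stdev

scores = [82.0, 90.0, 75.0, 88.0]  # hypothetical raw indicator values

def z_scores(xs):
    """Normalize scores to mean 0 and (sample) standard deviation 1."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def ranks(xs):
    """Rank scores from best (1) to worst; ties are not handled here."""
    order = sorted(xs, reverse=True)
    return [order.index(x) + 1 for x in xs]

def proportion_of_range(xs):
    """Rescale scores linearly so the worst maps to 0 and the best to 1."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]
```

Each method preserves the ordering of providers but changes the distances between them, which is one reason why the choice of rescaling method can affect a composite score.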

Thirdly, weights have to be attached to the individual indicators, signalling the relative importance of the different components of the composite indicator. The ranking of providers can change dramatically depending on the weights given to individual indicators (Goddard & Jacobs, 2009). Again, several options exist. The most straightforward is to use equal weights for every indicator, but this is unlikely to reflect the relative importance of individual measures. Another option is to base the weights on expert judgement or the preferences of the target audience. Further options include opportunity-based (denominator-based) weighting, which gives more weight to indicators for more prevalent conditions (for example, higher weights for diabetes-related indicators than for acromegaly-related indicators), and numerator-based weighting, which gives more weight to indicators covering a larger number of events (for example, higher weight on medication interactions than on wrong-side surgery). Finally, yet another option is to use an all-or-none approach at the patient level, where a score of one is given only if all requirements for an individual patient have been met (for example, all five recommended pre-operative processes were performed).
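Two of these weighting options, together with the all-or-none approach, can be sketched as follows. The indicator names, values and weights are invented for illustration only:

```python
# Sketch of weighted composite construction; all numbers are hypothetical.

indicator_values = {
    "aspirin_on_arrival": 0.95,   # share of patients receiving the process
    "timely_reperfusion": 0.80,
    "smoking_advice":     0.60,
}

def composite(values, weights):
    """Weighted average of indicator values, normalized by total weight."""
    total = sum(weights.values())
    return sum(values[k] * w for k, w in weights.items()) / total

# Equal weights versus (invented) expert-judgement weights.
equal = composite(indicator_values, {k: 1.0 for k in indicator_values})
expert = composite(indicator_values, {
    "aspirin_on_arrival": 3.0,
    "timely_reperfusion": 2.0,
    "smoking_advice":     1.0,
})

# All-or-none at the patient level: a patient counts only if every
# recommended process was performed.
patients = [
    {"aspirin": True, "reperfusion": True},
    {"aspirin": True, "reperfusion": False},
]
all_or_none = sum(all(p.values()) for p in patients) / len(patients)
```

Comparing `equal` and `expert` for the same underlying values shows directly how the choice of weights shifts a provider's composite score.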

Again, there is no clear guidance on how best to construct a composite indicator. However, what is important is that indicator construction is transparent and that methodological choices and rationales are clearly explained to facilitate understanding. Furthermore, different choices will provide different incentives for improvement and these need to be considered during composite construction.

3.7. Selection of indicators

A wide range of existing indicators is available that can form the basis for the development of new quality measurement initiatives. For example, the National Quality Forum (NQF) in the USA provides an online database with more than a thousand quality indicators that can be searched by type of indicator (structure, process, outcome), by clinical area (for example, dental, cancer or eye care), by target of measurement (for example, provider, payer, population), and by endorsement status (i.e. whether they meet the NQF's measure evaluation criteria) (NQF, 2019a). The OECD Health Care Quality Indicator Project provides a list of 55 quality indicators for cross-country analyses of the quality of primary care, acute care and mental health care, as well as patient safety and patient experiences (OECD HCQI, 2016). The Australian Commission on Safety and Quality in Health Care has developed a broad set of indicators for hospitals, primary care, patient safety and patient experience, among others (ACSQHC, 2019).

The English Quality and Outcomes Framework (QOF) includes 77 indicators for evaluating the quality of primary care (NHS Employers, 2018), and these indicators have inspired several other countries to develop their own quality indicators for primary care. The NHS also publishes indicators for the assessment of medication safety (NHS BSA, 2019). In addition, several recent reviews have summarized available quality indicators for different areas of care, for example, palliative care (Pfaff & Markaki, 2017), mental health (Parameswaran, Spaeth-Rublee & Alan Pincus, 2015), primary care for patients with serious mental illnesses (Kronenberg et al., 2017), cardiovascular care (Campbell et al., 2008), and responsible use of medicines (Fujita, Moles & Chen, 2018). Different chapters in this book will refer to indicators as part of specific quality strategies such as public reporting (see Chapter 13).

In fact, there is a plethora of indicators that can potentially be used for measurement for the various purposes described previously (see section above: Different purposes of quality measurement and users of quality information). However, because data collection and analysis may consume considerable resources, and because quality measurement may have unintended consequences, initiatives have to carefully select (or newly develop) indicators based on the identified quality problem, the interested stakeholders and the purpose of measurement (Evans et al., 2009).

Quality measurement that aims to monitor and/or address problems related to specific diseases, for example, cardiovascular or gastrointestinal diseases, or particular groups of patients, for example, geriatric patients or paediatric patients, will likely require disease-specific indicators. By contrast, quality measurement aiming to address problems related to the organization of care (for example, waiting times in emergency departments), to specific providers (for example, falls during inpatient stays), or professionals (for example, insufficiently qualified personnel) will likely require generic indicators. Quality problems related to the effectiveness of care are likely to require rate-based disease-specific indicators, while safety problems are more likely to be addressed through (often generic) sentinel event indicators. Problems with regard to patient-centredness will likely require indicators based on patient surveys and expressed as rates.

The interested stakeholders and the purpose of measurement should determine the desired level of detail and the focus of measurement on structures, processes or outcomes. This is illustrated in Table 3.5, which summarizes the information needs of different stakeholders in relation to their different purposes. For example, governments responsible for assuring overall quality and accountability of healthcare service provision will require relatively few aggregated composite indicators, mostly of health outcomes, to monitor overall system level performance and to assure value for money. By contrast, provider organizations and professionals, which are mostly interested in quality improvement, are likely to demand a high number of disease-specific process indicators, which allow them to identify areas for quality improvement.

Table 3.5

Information needs of health system stakeholders with regard to quality of care.

Another issue that needs to be considered when choosing quality indicators is the question of finding the right balance between coverage and practicality. Relying on only a few indicators risks neglecting some aspects of care quality and distracting attention away from non-measured areas. It may also be necessary to have more than one indicator for a single quality aspect, for example, mortality, readmissions and a PREM. However, maintaining too many indicators will be expensive and impractical to use. Finally, the quality of quality indicators should be a determining factor in selecting indicators for measurement.

3.8. Quality of quality indicators

There are numerous guidelines and criteria available for evaluating the quality of quality indicators. In 2006 the OECD Health Care Quality Indicators Project published a list of criteria for the selection of quality indicators (Kelley & Hurst, 2006). A relatively widely used tool for the evaluation of quality indicators has been developed at the University of Amsterdam, the Appraisal of Indicators through Research and Evaluation (AIRE) instrument (de Koning, Burgers & Klazinga, 2007). The NQF in the USA has published its measure evaluation criteria, which form the basis for evaluations of the eligibility of quality indicators for endorsement (NQF, 2019b). In Germany yet another tool for the assessment of quality indicators – the QUALIFY instrument – was developed by the Federal Office for Quality Assurance (BQS) in 2007, and the Institute for Quality Assurance and Transparency in Health Care (IQTIG) defined a similar set of criteria in 2018 (IQTIG, 2018).

In general, the criteria defined by the different tools are quite similar but each tool adds certain aspects to the list. Box 3.1 summarizes the criteria defined by the various tools grouped along the dimensions of relevance, scientific soundness, feasibility and meaningfulness. The relevance of an indicator can be determined based on its effect on health or health expenditures, the importance that it has for the relevant stakeholders, the potential for improvement (for example, as determined by available evidence about practice variation), and the clarity of the purpose and the healthcare context for which the indicator was developed. The latter point is important because many of the following criteria are dependent on the specific purpose.

Box 3.1

Criteria for indicators.

For example, the desired level for the criteria of validity, sensitivity and specificity will differ depending on whether the purpose is external quality assurance or internal quality improvement. Similarly, if the purpose is to assure a minimum level of quality across all providers, the appraisal concept has to focus on minimum acceptable requirements, while it will have to distinguish between good and very good performers if the aim is to reward high-quality providers through a pay for quality approach (see Chapter 14).

Important aspects that need to be considered with regard to feasibility of measurement include whether previous experience exists with the use of the measure, whether the necessary information is available or can be collected in the required timeframe, whether the costs of measurement are acceptable, and whether the data will allow meaningful analyses for relevant subgroups of the population (for example, by socioeconomic status). Furthermore, the meaningfulness of the indicator is an important criterion, i.e. whether the indicator allows useful comparisons, whether the results are user-friendly for the target audience, and whether the distinction between high and low quality is meaningful for the target audience.

3.9. Data sources for measuring quality

Many different kinds of data are available that can potentially be used for quality measurement. The most often used data sources are administrative data, medical records of providers and data stored in different – often disease-specific – registers, such as cancer registers. In addition, surveys of patients or healthcare personnel can be useful to gain additional insights into particular dimensions of quality. Finally, other approaches, such as direct observation of a physician’s activities by a qualified colleague, are useful under specific conditions (for example, in a research context) but usually not possible for continuous measurement of quality.

There are many challenges with regard to the quality of the available data. These challenges can be categorized into four key aspects: (1) completeness, (2) comprehensiveness, (3) validity and (4) timeliness. Completeness means that the data properly include all patients with no missing cases. Comprehensiveness refers to whether the data contain all relevant variables needed for analysis, such as diagnosis codes, results of laboratory tests or procedures performed. Validity means that the data accurately reflect reality and are free of bias and errors. Finally, timeliness means that the data are available for use without considerable delay.
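As a rough sketch, two of these four dimensions (comprehensiveness and timeliness) can be screened automatically before analysis; completeness, in the sense of no missing cases, additionally requires an external denominator such as a population register, and validity usually requires comparison against a gold standard such as medical record review. The record structure, field names and thresholds below are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical discharge records; the schema is invented for illustration.
records = [
    {"patient_id": "P1", "diagnosis": "I21.0", "procedure": "36.10", "discharged": date(2023, 1, 5)},
    {"patient_id": "P2", "diagnosis": None,    "procedure": "36.10", "discharged": date(2023, 1, 7)},
    {"patient_id": "P3", "diagnosis": "I21.1", "procedure": None,    "discharged": date(2021, 6, 1)},
]

REQUIRED_FIELDS = ("patient_id", "diagnosis", "procedure", "discharged")  # comprehensiveness
MAX_LAG = timedelta(days=365)  # assumed timeliness threshold

def screen(records, today):
    """Count records with missing variables and records that are too old to be timely."""
    report = {"n": len(records), "incomplete_variables": 0, "stale": 0}
    for r in records:
        if any(r.get(f) is None for f in REQUIRED_FIELDS):
            report["incomplete_variables"] += 1
        if today - r["discharged"] > MAX_LAG:
            report["stale"] += 1
    return report

print(screen(records, today=date(2023, 6, 1)))
# → {'n': 3, 'incomplete_variables': 2, 'stale': 1}
```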

Data sources differ in their attributes and have different strengths and weaknesses, which are presented below and summarized in Table 3.6. The availability of data for research and quality measurement purposes differs substantially between countries. Some countries have more restrictive data privacy protection legislation in place, and also the possibility of linking different databases using unique personal identifiers is not available in all countries (Oderkirk, 2013; Mainz, Hess & Johnsen, 2019). Healthcare providers may also use patient data only for internal quality improvement purposes and prohibit transfer of data to external bodies. Nevertheless, with the increasing diffusion of IT technology in the form of electronic health records, administrative databases and clinical registries, opportunities of data linkage are increasing, potentially creating new and better options for quality measurement.

Table 3.6

Strengths and weaknesses of different data sources.

3.9.1. Administrative data

Administrative data are not primarily generated for quality or research purposes but by definition for administrative and management purposes (for example, billing data, routine documentation) and have the advantage of being readily available and easily accessible in electronic form. Healthcare providers, in particular hospitals, are usually mandated to maintain administrative records, which are used in many countries for quality measurement purposes. In addition, governments usually have registers of births and deaths that are potentially relevant for quality measurement but which are often not used by existing measurement systems.

Administrative discharge data from hospitals usually include a patient identifier, demographic information, primary and secondary diagnoses coded using the International Classification of Diseases (ICD), coded information about medical and surgical procedures, dates of services provided, provider identifiers and many other bits of information (Iezzoni, 2009).

However, more detailed clinical information on severity of disease (for example, available from lab test results) or information about functional impairment or quality of life is not available in administrative data. The strength of administrative data is that they are comprehensive and complete with few problems of missing data. The most important problem of administrative data is that they are generated by healthcare providers, usually for payment purposes. This means that coding may be influenced by the incentives of the payment system, and – once used for purposes of quality measurement – also by incentives attached to the measured quality of care.

3.9.2. Medical record data

Medical records contain the most in-depth clinical information and document the patient’s condition or problem, tests and treatments received and follow-up care. The completeness of medical record data varies greatly between and within countries and healthcare providers. Especially in primary care where the GP is familiar with the patient, proper documentation is often lacking. Also, if the patient changes provider during the treatment process and each provider keeps their own medical records, the different records would need to be combined to get a complete picture of the process (Steinwachs & Hughes, 2008).

Abstracting information from medical records can be expensive and time-consuming since medical records are rarely standardized. Another important aspect is to make sure that the information from medical records is gathered in a systematic way to avoid information bias. This can be done by defining clinical variables explicitly, writing detailed abstraction guidelines and training staff to maintain data quality. Medical record review is used mostly in internal quality improvement initiatives and research studies.

With the growth of electronic medical and electronic health records, the use of these data for more systematic quality measurement will likely increase in the future. The potential benefits of using electronic records are considerable as this may allow real-time routine analysis of the most detailed clinical information available, including information from imaging tests, prescriptions and pathology systems (Kannan et al., 2017). However, it will be necessary to address persisting challenges with regard to accuracy, completeness and comparability of the data collected in electronic records to enable reliable measurement of quality of care on the basis of these data (Chan et al., 2010).

3.9.3. Disease-specific registries

There are many disease-specific registries containing data that can be used for healthcare quality measurement purposes. Cancer registries exist in most developed countries and, while their main purpose is to register cancer cases and provide information on cancer incidence in their catchment area, the data can also be used for monitoring and evaluation of screening programmes and estimating cancer survival by follow-up of cancer patients (Bray & Parkin, 2009). In Scandinavian countries significant efforts have gone into standardizing cancer registries to enable cross-country comparability. Nevertheless, numerous differences persist with regard to registration routines and classification systems, which are important when comparing time trends in the Nordic countries (Pukkala et al., 2018).

In some countries there is a large number of clinical registries that are used for quality measurement. For example, in Sweden there are over a hundred clinical quality registries, which work on a voluntary basis as all patients must be informed and have the right to opt out. These registries are mainly for specific diseases and they include disease-specific data, such as severity of disease at diagnosis, diagnostics and treatment, laboratory tests, patient-reported outcome measures, and other relevant factors such as body mass index, smoking status or medication. Most of the clinical registries focus on specialized care and are based on reporting from hospitals or specialized day care centres (Emilsson et al., 2015).

With increasing diffusion of electronic health records, it is possible to generate and feed disease-specific population registries based on electronic abstraction (Kannan et al., 2017). Potentially, this may significantly reduce the costs of data collection for registries. Furthermore, linking of data from different registries with other administrative data sources can increasingly be used to generate datasets that enable more profound analyses.

3.9.4. Survey data

Survey data are another widely used source of quality information. Surveys are the only option for gaining information about patient experiences with healthcare services and thus are an important source of information about patient-centredness of care. Substantial progress has been made over recent years to improve standardization of both patient-reported experience measures (PREMs) and patient-reported outcome measures (PROMs) in order to facilitate international comparability (Fujisawa & Klazinga, 2017).

Surveys of patient experiences capture the patients’ views on health service delivery (for example, communication with nurses and doctors, staff responsiveness, discharge and care coordination). Most OECD countries have developed at least one national survey measuring PREMs over the past decade or so (Fujisawa & Klazinga, 2017), and efforts are under way to further increase cooperation and collaboration to facilitate comparability (OECD, 2017).

Surveys of patient-reported outcomes capture the patient’s perspective on their health status (for example, symptoms, functioning, mental health). PROMs surveys can use generic tools (for example, the SF-36 or EQ-5D) or disease-specific tools, which are usually more sensitive to change (Fitzpatrick, 2009). The NHS in the United Kingdom requires all providers to report PROMs for two elective procedures: hip replacement and knee replacement. Both generic (EQ-5D and EQ VAS) and disease-specific (Oxford Hip Score and Oxford Knee Score) instruments are used (NHS Digital, 2019b).
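In its simplest form, a provider-level PROM result is the average pre- to post-operative change in score. The sketch below assumes an EQ-5D-style index where higher values mean better health; the scores are invented:

```python
# Average health gain from paired pre-/post-operative PROM scores.
# Scores are invented; an EQ-5D-style index (higher = better) is assumed.

patients = [
    {"pre": 0.52, "post": 0.80},
    {"pre": 0.61, "post": 0.75},
    {"pre": 0.45, "post": 0.50},
]

gains = [p["post"] - p["pre"] for p in patients]
mean_gain = sum(gains) / len(gains)
print(round(mean_gain, 3))  # mean health gain for this (tiny) cohort
```

In practice the change score is usually also risk-adjusted before providers are compared (see section 3.10), since baseline severity and case mix differ across providers.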

Finally, several countries also use surveys of patient satisfaction in order to monitor provider performance. However, satisfaction is difficult to compare internationally because it is influenced by patients’ expectations about how they will be treated, which vary widely across countries and also within countries (Busse, 2012).

3.9.5. Direct observation

Direct observation is sometimes used for research purposes or as part of peer-review processes. Direct observation allows the study of clinical processes, such as the adherence to clinical guidelines and the availability of basic structures. Observation is normally considered to be too resource-intensive for continuous quality measurement. However, site visits and peer-reviews are often added to routine monitoring of secondary (administrative) data to investigate providers with unexplained variation in quality and to better understand the context where these data are produced.

3.10. Attribution and risk-adjustment

Two further conceptual and methodological considerations are essential when embarking on quality measurement or making use of quality data, in particular with regard to outcome indicators. Both are related to the question of responsibility for differences in measured quality of care or, in other words, related to the question of attributing causality to responsible agents (Terris & Aron, 2009). Ideally, quality measurement is based on indicators that have been purposefully developed to reflect the quality of care provided by individuals, teams, provider organizations (for example, hospitals) or other units of analysis (for example, networks, regions, countries) (see also above, Quality of quality indicators). However, many existing quality indicators do not reflect only the quality of care provided by the target of measurement but also a host of factors that are outside the direct control of an individual provider or provider organization.

For example, surgeon-specific mortality data for patients undergoing coronary artery bypass graft (CABG) have been publicly reported in England and several states of the USA for many years (Radford et al., 2015; Romano et al., 2011). Yet debate continues whether results actually reflect the individual surgeon’s quality of care or rather the quality of the wider hospital team (for example, including anaesthesia, intensive care unit quality) or the organization and management of the hospital (for example, the organization of resuscitation teams within hospitals) (Westaby et al., 2015). Nevertheless, with data released at the level of the surgeon, responsibility is publicly attributed to the individual and not to the organization.

Other examples where attributing causality and responsibility is difficult include outcome indicators defined using time periods (for example, 30-day mortality after hospitalization for ischemic stroke) because patients may be transferred between different providers and because measured quality will depend on care received after discharge. Similarly, attribution can be problematic for patients with chronic conditions, for example, attributing causality for hospitalizations of patients with heart failure – a quality indicator in the USA – is difficult because these patients may see numerous providers, such as one (or more) primary care physician(s) and specialists, for example, nephrologists and/or cardiologists.

What these examples illustrate is that attribution of quality differences to providers is difficult. However, it is important to accurately attribute causality because it is unfair to hold individuals or organizations accountable for factors outside their control. In addition, if responsibility is attributed incorrectly, quality improvement measures will be in vain, as they will miss the appropriate target. Therefore, when developing quality indicators, it is important that a causal pathway can be established between the agents under assessment and the outcome proposed as a quality measure. Furthermore, possible confounders, such as the influence of other providers or higher levels of the healthcare system on the outcome of interest, should be carefully explored in collaboration with relevant stakeholders (Terris & Aron, 2009).

Many important confounders outside the control of providers have not yet been mentioned; the most important are patient-level clinical factors and patient preferences. Prevalence of these factors may differ across patient populations and influence the outcomes of care. For example, severely ill patients or patients with multiple coexisting conditions are at risk of having worse outcomes than healthy individuals despite receiving high-quality care. Therefore, providers treating sicker patients are at risk of performing poorly on measured quality of care, in particular when measured through outcome indicators.

Risk-adjustment (sometimes called case-mix adjustment) aims to control for these differences (risk-factors) that would otherwise lead to biased results. Almost all outcome indicators require risk-adjustment to adjust for patient-level risk factors that are outside the control of providers. In addition, healthcare processes may be influenced by patients’ attitudes and perceptions, which should be taken into account for risk-adjustment of process indicators if relevant. Ideally, risk-adjustment assures that measured differences in the quality of care are not biased by differences in the underlying patient populations treated by different providers or in different regions.

An overview of potential patient (risk-) factors that may influence outcomes of care is presented in Table 3.7. Demographic characteristics (for example, age), clinical (for example, co-morbidities) and socioeconomic factors, health-related behaviours (for example, alcohol use, nutrition) and attitudes may potentially have an effect on outcomes of care. By controlling for these factors, risk-adjustment methods will produce estimates that are more comparable across individuals, provider organizations or other units of analysis.

Table 3.7

Potential patient risk-factors.

The field of risk-adjustment is developing rapidly and increasingly sophisticated methods are available for ensuring fair comparisons across providers, especially for conditions involving surgery, risk of death and post-operative complications (Iezzoni, 2009). Presentation of specific risk-adjustment methods is beyond the scope of this chapter but some general methods include direct and indirect standardization, multiple regression analysis and other statistical techniques. The selection of potential confounding factors needs to be done carefully, taking into account the ultimate purpose and use of the quality indicator that needs adjustment.
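As an illustration of indirect standardization, the sketch below compares each provider's observed deaths with the deaths expected given its case mix, using reference (all-provider) death rates per risk stratum. All rates and counts are invented:

```python
# Indirect standardization: observed/expected (O/E) ratio per provider.
# Reference rates and all counts are invented for illustration.

reference_rates = {"low": 0.01, "medium": 0.05, "high": 0.20}  # deaths per case, by risk stratum

providers = {
    "Hospital A": {"cases": {"low": 300, "medium": 100, "high": 20}, "deaths": 10},
    "Hospital B": {"cases": {"low": 50, "medium": 100, "high": 150}, "deaths": 36},
}

def oe_ratio(cases, deaths):
    """O/E ratio: 1.0 means as many deaths as expected given the case mix."""
    expected = sum(n * reference_rates[stratum] for stratum, n in cases.items())
    return deaths / expected

for name, p in providers.items():
    print(name, round(oe_ratio(p["cases"], p["deaths"]), 2))
# Hospital A: 0.83 (fewer deaths than expected)
# Hospital B: 1.01 (about as expected, despite a much higher crude death rate)
```

Hospital B's crude mortality (36 of 300 cases, 12%) is five times Hospital A's (10 of 420, about 2.4%), yet after accounting for its much sicker case mix it performs as expected, illustrating why crude outcome comparisons can mislead.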

In fact, the choice of risk-adjustment factors is not a purely technical exercise but relies on assumptions that are often not clearly spelled out. For example, in several countries the hospital readmission rate is used as a quality indicator in pay for quality programmes (Kristensen, Bech & Quentin, 2015). If it is believed that age influences readmission rates in a way hospitals cannot affect, age should be included in the risk-adjustment formula. However, if it is thought that hospitals can influence elderly patients’ readmission rates by special discharge programmes for the elderly, age may not be considered a “risk” but rather an indicator for the hospitals to use for identifying patients with special needs. The same arguments apply also for socioeconomic status. On the one hand, there are good reasons to adjust for socioeconomic variables because patients living in poorer neighbourhoods tend to have higher readmission rates. On the other hand, including socioeconomic variables in a risk-adjustment formula would implicitly mean that it was acceptable for hospitals located in poorer areas to have more readmissions.
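The consequence of this choice can be made concrete with a small sketch: the same hospital, serving mostly elderly patients, looks better than expected when age is in the adjustment model and worse than expected when it is not. All reference rates and counts are invented:

```python
# Whether age enters the risk-adjustment model can flip a hospital's verdict.
# All reference rates and counts are invented for illustration.

rates_by_age = {"young": 0.08, "old": 0.20}   # reference readmission rates with age in the model
overall_rate = 0.12                           # reference rate with age ignored

cases = {"young": 100, "old": 400}            # a hospital serving mostly elderly patients
observed_readmissions = 80

expected_with_age = sum(n * rates_by_age[g] for g, n in cases.items())  # 8 + 80 = 88
expected_without_age = sum(cases.values()) * overall_rate               # 500 * 0.12 = 60

print(round(observed_readmissions / expected_with_age, 2))     # 0.91: better than expected
print(round(observed_readmissions / expected_without_age, 2))  # 1.33: worse than expected
```

Treating age as a risk factor excuses the hospital's high readmission count; leaving it out implicitly holds the hospital responsible for managing its elderly patients, for example through targeted discharge programmes.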

The assumptions and methodological choices made when selecting variables for risk-adjustment may have a powerful effect on risk-adjusted measured quality of care. Some critics (for example, Lilford et al., 2004) have argued that comparative outcome data should not be used externally to make judgements about quality of hospital care. More recent criticism of risk-adjustment methods has suggested that risk-adjustment methods of current quality measurement systems could be evaluated by assigning ranks similar to those used to rate the quality of evidence (Braithwaite, 2018). Accordingly, A-level risk-adjustment would adjust for all known causes of negative consequences that are beyond the control of clinicians yet influence outcomes. C-level risk-adjustment would fail to control for several important factors that cause negative consequences, while B-level risk-adjustment would be somewhere in between.

3.11. Conclusion

This chapter has introduced some basic concepts and methods for the measurement of healthcare quality and presented a number of related challenges. Many different stakeholders have varying needs for information on healthcare quality, and the development of quality measurement systems should always take into account the purpose of measurement and the needs of different stakeholders. Quality measurement is important for quality assurance and accountability, to make sure that providers are delivering good-quality care, but it is also vital for quality improvement programmes, to ensure that these interventions lead to increases in care quality.

The development and use of quality measures should always be fit-for-purpose. For example, outcome-based quality indicators, such as those used by the OECD, are useful for international comparisons or national agenda-setting but providers such as hospitals or health centres may need more specific indicators related to processes of care in order to enable quality improvement. The Donabedian framework of structure, process and outcome indicators provides a comprehensive, easily understandable model for classifying different types of indicator, and it has guided indicator development of most existing quality measurement systems.

Quality indicators should be of high quality and should be carefully chosen and implemented in cooperation with providers and clinicians. The increasing availability of clinical data in the form of electronic health records is multiplying possibilities for quality measurement on the basis of more detailed indicators. In addition, risk-adjustment is important to avoid high-quality providers being incorrectly and unfairly identified as providing poor quality of care – and, conversely, to prevent poor providers from appearing to provide good quality of care. Again, the increasing availability of data from electronic medical records may expand the options for better risk-adjustment.

However, most quality measurement initiatives will continue to focus – for reasons of practicality and data availability – only on a limited set of quality indicators. This means that a fundamental risk of quality measurement will persist: measurement will always direct attention to those areas that are covered by quality indicators, potentially at the expense of other important aspects of quality that are more difficult to capture through measurement.

Nevertheless, without quality information policy-makers lack the knowledge base to steer health systems, patients can only rely on personal experiences or those of friends for choosing healthcare providers, and healthcare providers have no way of knowing whether their quality improvement programmes have worked as expected.

Quality information is a tool and it can do serious damage if used inappropriately. Seven basic principles of using quality indicators are summarized in Box 3.2. It is critical to be aware of the limitations of quality measurement and to be cautious of using quality information for quality strategies that provide powerful incentives to providers, such as public reporting (see Chapter 13) or P4Q schemes (see Chapter 14), as these may lead to unintended consequences such as gaming or patient selection.

Box 3.2

Seven principles to take into account when using quality indicators.

References

  • ACSQHC. Indicators of Safety and Quality. Australian Commission on Safety and Quality in Health Care (ACSQHC). 2019. https://www.safetyandquality.gov.au/our-work/indicators/#Patientreported, accessed 21 March 2019.

  • Baker DW, Chassin MR. Holding Providers Accountable for Health Care Outcomes. Annals of Internal Medicine. 2017;167(6):418–23. [PubMed: 28806793]

  • Braithwaite RS. Risk Adjustment for Quality Measures Is Neither Binary nor Mandatory. Journal of the American Medical Association. 2018;319(20):2077–8. [PubMed: 29710277]

  • Bray F, Parkin DM. Evaluation of data quality in the cancer registry: Principles and methods. Part I: Comparability, validity and timeliness. European Journal of Cancer. 2009;45(5):747–55. [PubMed: 19117750]

  • Busse R. Being responsive to citizens’ expectations: the role of health services in responsiveness and satisfaction. In: McKee M, Figueras J, editors. Health Systems: Health, wealth and societal well-being. Maidenhead: Open University Press/McGraw-Hill; 2012.

  • Calhoun C. Oxford dictionary of social sciences. New York: Oxford University Press; 2002.

  • Campbell SM, et al. Quality indicators for the prevention and management of cardiovascular disease in primary care in nine European countries. European Journal of Cardiovascular Prevention & Rehabilitation. 2008;15(5):509–15. [PubMed: 18695594]

  • Carinci F, et al. Towards actionable international comparisons of health system performance: expert revision of the OECD framework and quality indicators. International Journal for Quality in Health Care. 2015;27(2):137–46. [PubMed: 25758443]

  • Chan KS, et al. Electronic health records and the reliability and validity of quality measures: a review of the literature. Medical Care Research and Review. 2010;67(5):503–27. [PubMed: 20150441]

  • Cheng EM, et al. Quality measurement: here to stay. Neurology Clinical Practice. 2014;4(5):441–6. [PMC free article: PMC4196461] [PubMed: 25317378]

  • Cylus J, Papanicolas I, Smith P. Health system efficiency: how to make measurement matter for policy and management. Copenhagen: WHO, on behalf of the European Observatory on Health Systems and Policies; 2016. [PubMed: 28783269]

  • Davies H. Measuring and reporting the quality of health care: issues and evidence from the international research literature. NHS Quality Improvement Scotland; 2005.

  • Donabedian A. The Definition of Quality and Approaches to Its Assessment. Vol 1. Explorations in Quality Assessment and Monitoring. Ann Arbor, Michigan, USA: Health Administration Press; 1980.

  • EC. So What? Strategies across Europe to assess quality of care. Report by the Expert Group on Health Systems Performance Assessment. European Commission (EC). Brussels: European Commission; 2016.

  • Emilsson L, et al. Review of 103 Swedish Healthcare Quality Registries. Journal of Internal Medicine. 2015;277(1):94–136. [PubMed: 25174800]

  • Evans SM, et al. Prioritizing quality indicator development across the healthcare system: identifying what to measure. Internal Medicine Journal. 2009;39(10):648–54. [PubMed: 19371394]

  • Fitzpatrick R. Patient-reported outcome measures and performance measurement. In: Smith P, et al., editors. Performance Measurement for Health System Improvement: Experiences, Challenges and Prospects. Cambridge: Cambridge University Press; 2009.

  • Forde I, Morgan D, Klazinga N. Resolving the challenges in the international comparison of health systems: the must do’s and the trade-offs. Health Policy. 2013;112(1–2):4–8. [PubMed: 23434265]

  • Freeman T. Using performance indicators to improve health care quality in the public sector: a review of the literature. Health Services Management Research. 2002;15:126–37. [PubMed: 12028801]

  • Fujisawa R, Klazinga N. Measuring patient experiences (PREMS): Progress made by the OECD and its member countries between 2006 and 2016. Paris: Organisation for Economic Co-operation and Development (OECD); 2017.

  • Fujita K, Moles RJ, Chen TF. Quality indicators for responsible use of medicines: a systematic review. BMJ Open. 2018;8:e020437. [PMC free article: PMC6082479] [PubMed: 30012782]

  • Gardner K, Olney S, Dickinson H. Getting smarter with data: understanding tensions in the use of data in assurance and improvement-oriented performance management systems to improve their implementation. Health Research Policy and Systems. 2018;16(125) [PMC free article: PMC6303867] [PubMed: 30577854]

  • Goddard M, Jacobs R. Using composite indicators to measure performance in health care. In: Smith P, et al. (eds.). Performance Measurement for Health System Improvement: Experiences, Challenges and Prospects. Cambridge: Cambridge University Press; 2009.

  • Hurtado MP, Swift EK, Corrigan JM. Envisioning the National Health Care Quality Report. Washington, DC: National Academy Press; 2001. [PubMed: 25057551]

  • ICHOM. Standard Sets. International Consortium for Health Outcomes Measurement (ICHOM). 2019. Available at: https://www.ichom.org/standard-sets/, accessed 8 February 2019.

  • Iezzoni L. Risk adjustment for performance measurement. In: Smith P, et al. (eds.). Performance Measurement for Health System Improvement: Experiences, Challenges and Prospects. Cambridge: Cambridge University Press; 2009.

  • IQTIG. Methodische Grundlagen V1.1.s. Entwurf für das Stellungnahmeverfahren. Institut für Qualitätssicherung und Transparenz im Gesundheitswesen (IQTIG). 2018. Available at: https://iqtig.org/das-iqtig/grundlagen/methodische-grundlagen/, accessed 18 March 2019.

  • Kannan V, et al. Rapid Development of Specialty Population Registries and Quality Measures from Electronic Health Record Data. Methods of information in medicine. 2017;56(99):e74–e83. [PMC free article: PMC5608102] [PubMed: 28930362]

  • Kelley E, Hurst J. Health Care Quality Indicators Project: Conceptual framework paper. Paris: Organisation for Economic Co-operation and Development (OECD); 2006. Available at: https://www.oecd.org/els/health-systems/36262363.pdf, accessed 22 March 2019.

  • De Koning J, Burgers J, Klazinga N. Appraisal of indicators through research and evaluation (AIRE). 2007. Available at: https://www.zorginzicht.nl/kennisbank/PublishingImages/Paginas/AIRE-instrument/AIRE%20Instrument%202.0.pdf, accessed 21 March 2019.

  • Kristensen SR, Bech M, Quentin W. A roadmap for comparing readmission policies with application to Denmark, England and the United States. Health Policy. 2015;119(3):264–73. [PubMed: 25547401]

  • Kronenberg C, et al. Identifying primary care quality indicators for people with serious mental illness: a systematic review. British Journal of General Practice. 2017;67(661):e519–e530. [PMC free article: PMC5519123] [PubMed: 28673958]

  • Lawrence M, Olesen F. Indicators of Quality in Health Care. European Journal of General Practice. 1997;3(3):103–8.

  • Lighter D. How (and why) do quality improvement professionals measure performance. International Journal of Pediatrics and Adolescent Medicine. 2015;2(1):7–11. [PMC free article: PMC6372368] [PubMed: 30805429]

  • Lilford R, et al. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet. 2004;363(9424):1147–54. [PubMed: 15064036]

  • Lohr KN. Medicare: A Strategy for Quality Assurance. Washington (DC), US: National Academies Press; 1990.

  • Lüngen M, Rath T. Analyse und Evaluierung des QUALIFY Instruments zur Bewertung von Qualitätsindikatoren anhand eines strukturierten qualitativen Interviews. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen. 2011;105(1):38–43. [PubMed: 21382603]

  • Mainz J. Defining and classifying indicators for quality improvement. International Journal for Quality in Health Care. 2003;15(6):523–30. [PubMed: 14660535]

  • Mainz J, Hess MH, Johnsen SP. Perspectives on Quality: the Danish unique personal identifier and the Danish Civil Registration System as a tool for research and quality improvement. International Journal for Quality in Health Care (efirst). 2019. https://doi.org/10.1093/intqhc/mzz008. [PubMed: 31220255]

  • Marjoua Y, Bozic K. Brief history of quality movement in US healthcare. Current Reviews in Musculoskeletal Medicine. 2012;5(4):265–73. [PMC free article: PMC3702754] [PubMed: 22961204]

  • NHS BSA. Medication Safety – Indicators Specification. NHS Business Services Authority (NHS BSA). 2019. Available at: https://www.nhsbsa.nhs.uk/sites/default/files/2019-02/Medication%20Safety%20-%20Indicators%20Specification.pdf, accessed 21 March 2019.

  • NHS Digital. Indicator Methodology and Assurance Service. NHS Digital, Leeds. 2019a. Available at: https://digital.nhs.uk/services/indicator-methodology-and-assurance-service, accessed 18 March 2019.

  • NHS Digital. Patient Reported Outcome Measures (PROMs). NHS Digital, Leeds. 2019b. Available at: https://digital.nhs.uk/data-and-information/data-tools-and-services/data-services/patient-reported-outcome-measures-proms, accessed 22 March 2019. [PubMed: 30591362]

  • NHS Employers. 2018/19 General Medical Services (GMS) contract Quality and Outcomes Framework (QOF). 2018. Available at: https://www.nhsemployers.org/-/media/Employers/Documents/Primary-care-contracts/QOF/2018-19/2018-19-QOF-guidance-for-stakeholders.pdf, accessed 21 March 2019.

  • NQF. Quality Positioning System. National Quality Forum (NQF). 2019a. Available at: http://www.qualityforum.org/QPS/QPSTool.aspx, accessed 19 March 2019.

  • NQF. Measure evaluation criteria. National Quality Forum (NQF). 2019b. Available at: http://www.qualityforum.org/measuring_performance/submitting_standards/measure_evaluation_criteria.aspx, accessed 19 March 2019.

  • Oderkirk J. International comparisons of health system performance among OECD countries: opportunities and data privacy protection challenges. Health Policy. 2013;112(1–2):9–18. [PubMed: 23870099]

  • OECD. Handbook on Constructing Composite Indicators: Methodology and user guide. Organisation for Economic Co-operation and Development (OECD). 2008. Available at: https://www.oecd.org/sdd/42495745.pdf, accessed 22 March 2019.

  • OECD. Improving Value in Health Care: Measuring Quality. Organisation for Economic Co-operation and Development. 2010. Available at: https://www.oecd-ilibrary.org/docserver/9789264094819-en.pdf, accessed 17 December 2018.

  • OECD. Strengthening the international comparison of health system performance through patient-reported indicators. Paris: Organisation for Economic Co-operation and Development; 2017. Recommendations to OECD Ministers of Health from the High Level Reflection Group on the future of health statistics.

  • OECD. Patient-Reported Indicators Survey (PaRIS). Paris: Organisation for Economic Co-operation and Development; 2019. Available at: http://www.oecd.org/health/paris.htm, accessed 8 February 2019.

  • OECD HCQI. Definitions for Health Care Quality Indicators 2016–2017. HCQI Data Collection. Organisation for Economic Co-operation and Development Health Care Quality Indicators Project. 2016. Available at: http://www.oecd.org/els/health-systems/Definitions-of-Health-Care-Quality-Indicators.pdf, accessed 21 March 2019.

  • Papanicolas I, Smith P. Health system performance comparison: an agenda for policy, information and research. Maidenhead: Open University Press, on behalf of the European Observatory on Health Systems and Policies; 2013.

  • Parameswaran SG, Spaeth-Rublee B, Alan Pincus H. Measuring the Quality of Mental Health Care: Consensus Perspectives from Selected Industrialized Countries. Administration and Policy in Mental Health. 2015;42:288–95. [PubMed: 24951953]

  • Pfaff K, Markaki A. Compassionate collaborative care: an integrative review of quality indicators in end-of-life care. BMC Palliative Care. 2017;16:65. [PMC free article: PMC5709969] [PubMed: 29191185]

  • Porter M. What is value in health care. New England Journal of Medicine. 2010;363(26):2477–81. [PubMed: 21142528]

  • Pukkala E, et al. Nordic Cancer Registries – an overview of their procedures and data comparability. Acta Oncologica. 2018;57(4):440–55. [PubMed: 29226751]

  • Radford PD, et al. Publication of surgeon specific outcome data: a review of implementation, controversies and the potential impact on surgical training. International Journal of Surgery. 2015;13:211–16. [PubMed: 25498494]

  • Romano PS, et al. Impact of public reporting of coronary artery bypass graft surgery performance data on market share, mortality, and patient selection. Medical Care. 2011;49(12):1118–25. [PubMed: 22002641]

  • Rubin HR, Pronovost P, Diette G. The advantages and disadvantages of process-based measures of health care quality. International Journal for Quality in Health Care. 2001;13(6):469–74. [PubMed: 11769749]

  • Shwartz M, Restuccia JD, Rosen AK. Composite Measures of Health Care Provider Performance: A Description of Approaches. Milbank Quarterly. 2015;93(4):788–825. [PMC free article: PMC4678940] [PubMed: 26626986]

  • Smith P, et al. Introduction. In: Smith P, et al. (eds.). Performance Measurement for Health System Improvement: Experiences, Challenges and Prospects. Cambridge: Cambridge University Press; 2009.

  • Steinwachs DM, Hughes RG. Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Rockville (MD): Agency for Healthcare Research and Quality (US); 2008. Health Services Research: Scope and Significance. [PubMed: 21328752]

  • Terris DD, Aron DC. Attribution and causality in health-care performance measurement. In: Smith P, et al. (eds.). Performance Measurement for Health System Improvement: Experiences, Challenges and Prospects. Cambridge: Cambridge University Press; 2009.

  • Voeten SC, et al. Quality indicators for hip fracture care, a systematic review. Osteoporosis International. 2018;29(9):1963–85. [PMC free article: PMC6105160] [PubMed: 29774404]

  • Westaby S, et al. Surgeon-specific mortality data disguise wider failings in delivery of safe surgical services. European Journal of Cardiothoracic Surgery. 2015;47(2):341–5. [PubMed: 25354748]
