It has been called the smallest difference that clinicians and patients would care about. The concept of Minimum Clinically Important Difference (MCID) has been around for over 20 years, but the literature still can’t quite decide what to make of it.
It is an attempt to address that terrible old medical non-joke, ‘the operation was a success but the patient died’: to give clinicians a handle on what really matters, the outcome from the patient’s point of view.
For the patient, the question is simple: you are ill, you are given a treatment, and at the end of it do you feel the treatment has helped? To work that out, the clinician basically has to ask the patient. A treatment given to an asthma patient, for example, might increase lung capacity by a statistically wonderful 5 percent; but if the patient feels no better, the statistic is pretty meaningless, and the treatment has failed to achieve the MCID.
Conceptually, therefore, the MCID represents a threshold for clinically meaningful improvement in a patient’s health-related quality of life (HRQoL), as measured by what the patient reports. There are standard questionnaires, such as the Short Form 36-item questionnaire (SF-36) and the EuroQol 5D (EQ-5D), which the patient can fill in; from these a numerical score can be derived that gives some indication of his or her quality of life. The key word here is ‘quality’. The concept of MCID avoids reliance on “statistically significant” results that are qualitatively meaningless, and dovetails with the general distinction between practical relevance and statistical significance.
Despite its intuitive appeal, the appropriate estimation and use of the MCID remain unclear. At least two general estimation strategies have been espoused in the literature: 1) the anchor-based approach; and 2) the distributional approach. The anchor-based approach uses some external response, such as whether the patient reported feeling “better” or “worse” after treatment, and calculates the MCID by comparing the HRQoL of patients in the “better” group to that of patients in the “worse” group. In contrast, the distributional approach calculates the MCID from the within-sample change in HRQoL scores relative to some measure of variability in the distribution. The distributional approach therefore bases its calculation solely on existing HRQoL responses and does not rely on an external criterion.
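The anchor-based calculation described above can be sketched in a few lines. The data, function name, and scoring scale here are illustrative assumptions, not a standard implementation; real anchor-based studies use validated instruments and larger samples.

```python
from statistics import mean

def anchor_based_mcid(changes_better, changes_worse):
    """Anchor-based MCID sketch: the difference in mean HRQoL change
    between patients whose anchor response was 'better' and those
    whose anchor response was 'worse'."""
    return mean(changes_better) - mean(changes_worse)

# Hypothetical change scores on a 0-100 HRQoL instrument.
better = [12, 9, 15, 11, 8]   # patients who reported feeling better
worse = [-6, -2, -8, -4]      # patients who reported feeling worse

print(anchor_based_mcid(better, worse))  # prints 16
```

Note that everything here hinges on the anchor question itself being a trustworthy subjective measure, which is precisely the circularity raised below.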
Each approach carries fundamental flaws. In practice, the external criterion used in the anchor-based approach is often some other subjective questionnaire, so researchers are essentially converting one subjective measure to match another. If this is the way to go, why not simply drop the second questionnaire and analyze the external criterion directly?
While the anchor-based approach suffers from multiple measures of subjectivity, the distributional approach suffers from a different flaw. The point of the MCID is to avoid reliance on statistical results with no qualitative value, but the distributional approach places a statistical framework around the calculation of the MCID. For example, some authors estimate MCID as one standard error above the mean score. Far from an “alternative” to statistical significance, this approach amounts to simply changing the threshold for statistical significance.
Perhaps more importantly, the general concept of MCID is somewhat lacking. We are essentially assigning an objective threshold to a subjective metric. The point of incorporating HRQoL into the evaluation of health care programs was to acknowledge the importance of a patient’s self-reported health assessment and to distinguish self-reported outcomes from objective clinical measures. Efforts to morph the former into the latter seem misplaced.