Today, we're going to take a 30,000-foot
view and consider the major policy motivations for evaluation.
We will move beyond the conceptual and philosophical reasons and try to get pragmatic.
Sound good? Okay, let's go.
A fundamental reason to do evaluation is to drive toward evidence-based policy-making.
Evidence-based policy-making is a global trend,
where instead of focusing on inputs when deciding on which policies to advance,
the focus is on the outcomes and results we are trying to achieve.
Inputs include dollars to invest,
amounts of raw materials needed or numbers of workers to hire.
Outcomes, on the other hand, include more jobs,
fewer people living below the poverty line, or more people with health insurance.
So instead of focusing on how much money we want to invest in improving public health,
we set targets on how many more individuals we want to be covered by health insurance.
Makes sense? Doctors in
this course may be familiar with the concept of evidence-based medicine,
also known as evidence-based practice or evidence-based clinical decision-making.
This was pioneered by a remarkable Stanford medical professor named David Eddy,
who earned a PhD in operations research while working as a surgeon at its medical center.
He had a simple but elegant insight that was novel at the time:
we need formal methods, rather than
expert opinion alone, to inform the guidelines our doctors use to treat us.
In Dr. Eddy's own words: "Traditionally,
a decision to use a particular treatment could be
justified by little more than the claim that it was standard and accepted,
or that the individual physician believed that it was in
the patient's best interest, or simply that no other treatment was available.
However, the credibility of clinical judgment, whether exercised individually or
collectively, has been severely challenged
by observations of wide variations in practices,
inappropriate care, and practitioner uncertainty.
The presumption that if a treatment is widely used,
it must have some benefit has been shaken,
not only by reports that
many common treatments have no supporting evidence of effectiveness,
but by actual trials that have overturned some common beliefs,
such as the value of flecainide for heart attacks and steroids for acute optic neuritis.
The response has been a gradual but persistent movement to require
documentation beyond the testimony of experts or the existence of
a consensus, and to require empirical evidence of
benefit before recommending that a treatment be used."
That's a mouthful. But what he did was start with cancer screening,
like Pap smears for cervical cancer and tests for colon cancer,
and show that they should not be done every year.
At the time, in the 1980s and before, they were done annually.
His seminal mathematical modeling work, published in the journal Cancer in 1987, matched
empirical data on cervical cancer occurrence from
trials with different screening intervals.
This was so influential that it went on to
spawn an entirely new field of cost-effectiveness studies,
including of Pap smears,
which, along with new evidence on HPV strains 16 and 18 and direct testing for HPV,
now inform the cervical cancer screening guidelines from
the United States Preventive Services Task Force, or
USPSTF, under which we physicians practice today.
Now, a woman only needs a Pap smear every few years, depending on her clinical situation.
So what a difference some data-driven evidence makes for healthcare costs,
not to mention quality of life.
Can you imagine getting a Pap smear every year or,
worse, a colonoscopy every year to check for colon cancer? Not me.
So the same now applies to public and private sector policy-making.
We need to formally develop evidence on what works and what doesn't.
It is also being used to enhance accountability and inform budget allocations over time.
You can imagine that governments and citizens are much happier to give
officials funding if policies have a measured, demonstrated impact.
Evaluations are thus the core of evidence-based policy-making.
That's how we build knowledge about the effectiveness of programs.
For example, do accountable care organizations or
ACOs improve the quality of care for Medicare beneficiaries with chronic disease?
Do bundled payments for joint replacements lead to improved functional outcomes,
like the ability of a patient to walk upstairs after
a joint replacement, or reduced post-operative pain?
Or does Medicaid expansion save lives?
For example, does it reduce mortality?
Policy evaluations thus assess the changes in the well-being of
individuals that can be attributed to a particular project, program, or policy.
This focus on attribution is the hallmark of policy impact evaluations.
Correspondingly, the central challenge in carrying out
effective impact evaluations is to identify the causal relationship between the project,
program or policy and the outcomes of interest.
I want you to remember these two phrases,
attribution and causal relationship,
as we will come back to them again soon.
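(For those who like notation, one standard way to formalize these two phrases is the potential-outcomes sketch below. This notation is not from the lecture itself, so treat it as an assumption about how the idea is usually written down, not a definition from the course.)

\[
\tau_i = Y_i(1) - Y_i(0), \qquad \mathrm{ATE} = \mathbb{E}\big[\,Y(1) - Y(0)\,\big],
\]

where, for person $i$, $Y_i(1)$ is the outcome if exposed to the project, program, or policy, and $Y_i(0)$ the outcome if not. Since only one of the two is ever observed for any given person, attributing an observed change to the policy rather than to something else is exactly the causal-identification challenge just described.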
Another important characteristic of most but
not all policy evaluations is that they generally
estimate the average impact of a program on its beneficiaries.
For example, did a subsidy and
mandate increase the number of individuals purchasing health insurance,
or did ACOs reduce the total cost of care for Medicare beneficiaries assigned to them?
These focus on the average American or
the average Medicare beneficiary or ACO who is affected by that policy.
A more recent trend in evaluation is to look at
the heterogeneity in effects or the effect among subgroups.
For example, did that subsidy and mandate help in obtaining health insurance for
people just over the poverty line just as much as for those well above it?
Or did ACOs help Medicare beneficiaries in rural areas
just as much as those in urban environments?
By getting more information on the average effects
or main effects and the subgroup effects,
we can best choose between policy options.
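To make this concrete, here is a minimal sketch in Python of estimating a main effect and subgroup effects by a simple difference in means. The dataset, column names, and effect sizes are all hypothetical, invented to illustrate the subsidy-and-mandate example above; a real evaluation would also need a credible identification strategy, not just a comparison of means.

import numpy as np
import pandas as pd

# Hypothetical survey: 'treated' marks exposure to the subsidy-and-mandate
# policy, 'insured' is the outcome, and 'near_poverty' flags people just
# over the poverty line. All names and numbers are made up.
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # randomized here, for illustration
    "near_poverty": rng.integers(0, 2, n).astype(bool),
})
# Simulate an outcome in which the policy helps the near-poverty group more.
p_insured = 0.60 + np.where(df["near_poverty"], 0.15, 0.05) * df["treated"]
df["insured"] = rng.random(n) < p_insured

def diff_in_means(d: pd.DataFrame) -> float:
    """Effect estimate: mean outcome among treated minus among untreated."""
    return (d.loc[d["treated"] == 1, "insured"].mean()
            - d.loc[d["treated"] == 0, "insured"].mean())

print(f"Average (main) effect: {diff_in_means(df):+.3f}")
for flag, label in [(True, "just over the poverty line"), (False, "well above it")]:
    print(f"Subgroup effect, {label}: {diff_in_means(df[df['near_poverty'] == flag]):+.3f}")

Because the simulation builds the heterogeneity in, the subgroup estimates land near +0.15 for the near-poverty group and +0.05 for everyone else, while the main effect averages the two. That gap between the main effect and the subgroup effects is precisely the extra information a policymaker would use to choose between options.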
Great, so today we covered evidence-based
policy-making as a, if not the, key driver for evaluations.
We discussed Dr. David Eddy and the analogy
to evidence-based medicine for the doctors in the crowd.
And we highlighted some concepts like
average or main effects versus subgroup effects. See you next time.