>> Good afternoon. It's very much our pleasure to be sharing our talk today, entitled Orientation to the Science of Dissemination and Implementation. We want to start by thanking the folks who made this possible today: Sarah Birken, Rebecca Selah [assumed spelling], Sarah Bernal, David Chambers, and the NCI, for this great idea and for giving us the opportunity to present today. We'll start by giving you a sense of the objective of this talk, which is really to give a broad overview of the field of implementation science, so it's going to be a survey. We have also prepared a resource guide, which will be paired with this talk on the website and will give you additional resources once you drink the Kool-Aid and decide you're as interested in implementation science as we are. We also invite you to follow us on Twitter for the latest happenings in implementation science.
I'm just going to start with very brief introductions of myself and my colleagues who will be speaking today. My name is Rinad Beidas; I'm an associate professor of psychiatry and of medical ethics and health policy at the University of Pennsylvania. I'm joined by Dr. Cara Lewis, who is an associate investigator at Kaiser Permanente Washington Health Research Institute, and by Dr. Byron Powell, who is an assistant professor of health policy and management at the Gillings School of Global Public Health. Dr. David Chambers, who is the deputy director of implementation science at the National Cancer Institute, will also be joining us today and will be asking us some questions about the field. Very briefly, this is the overview of what we hope to accomplish today. We'll start with a big-picture introduction to the field of implementation science. Dr. Lewis and I will provide two case studies from our ongoing empirical work that we'll try to infuse throughout the presentation. Then we'll discuss implementation frameworks and process models and talk about how to identify and prioritize barriers and facilitators to the implementation process. Then we'll talk about implementation strategies and tests of implementation strategies, and about evaluating your implementation efforts, and we'll have some time for discussion and question and answer at the end. So, without further ado, I will pass it on to my colleague, Dr. Powell.
>> Thank you, Rinad, and hello, everyone. Just by way of foundation, we know that we have a lot of challenges in translating research into practice. This figure, drawn from Balas and Boren in a study that's now almost 20 years old, depicts that moving from original research to patient and population health impact takes a lot of time. By the time we conceptualize our research, submit it to funding agencies, get it published in peer-reviewed journals, hopefully into bibliographic databases, and then subsequently into reviews, guidelines, and textbooks, where it's often picked up more widely, a great deal of time has passed. Balas and Boren illustrate the leaky pipeline here and demonstrate that it takes 17 years to turn 14% of original research to the benefit of patient care. And again, this is an old study, but I'm told by colleagues at NCI that it is actually being replicated in a more recent study and that we haven't gotten much better at moving research into practice. This really forms the foundational argument for implementation science. We've all heard the adage "from bench to bedside," the hope that basic and clinical research is turned into the benefit of patient care, but what we often see is really from bench to bookshelf. This is well documented in a number of key reports from the Institute of Medicine: the To Err is Human report published in 1999, followed by Crossing the Quality Chasm, which demonstrate that the gaps between what we know works and what we actually do in routine care settings are still enormous and that we have a lot to do to introduce evidence into practice.
So, dissemination and implementation science has really been prioritized, certainly in the U.S. and internationally. This is a very U.S.-centric slide, but it demonstrates that for the NIH, AHRQ, the National Academies, and a number of other federal institutes and private foundations, implementation science has become a big priority. In particular, we spend a lot of money on basic and clinical research, but far less on understanding how we can actually translate the findings of that research into routine care. We define, or I should say, the NIH program announcement on D&I defines, dissemination research as the scientific study of targeted distribution of information and intervention materials to a specific public health or clinical practice audience; the intent is really to understand how to best spread and sustain knowledge and the associated evidence-based interventions. The program announcement defines implementation research as the scientific study of the use of strategies to adopt and integrate evidence-based health interventions into clinical and community settings, ultimately to improve patient outcomes and benefit population health. There is another set of related terms you often see in the literature: diffusion, from Everett Rogers' work on diffusion of innovations, dissemination, and implementation. Perhaps the most straightforward way to think about them is that diffusion is letting it happen, letting innovations become routinely used over time; dissemination is helping it happen; and implementation is making it happen. We want to acknowledge at the outset that dissemination and implementation science is influenced by multiple fields and disciplines. We've begged, borrowed, and stolen from a lot of interesting places, including improvement science, intervention effectiveness and process research, healthcare, behavioral economics, medical anthropology, social psychology, and organizational and management science, and these are really just a smattering of the areas we've borrowed from. So, as you move forward in this field, it's often fruitful to draw from other disciplines; in fact, my colleagues may share some examples of ways in which they've drawn from behavioral economics and other fields in their work.
This just demonstrates one translational pipeline model that I think is useful in thinking about implementation science. We can think about some of our basic and preintervention work, and how that leads into efficacy studies, which are highly controlled and really designed to tell us whether an intervention could work. Then we move to more real-world effectiveness studies, where we have a little less researcher control and we're really trying to answer whether an intervention does work. When we move into dissemination and implementation research, we're trying to figure out how we can make an intervention work in real-world settings of care. In this upper-right purple box, you see that implementation is often conceptualized in multiple phases, starting with exploration, then preparation, implementation, and sustainment; this is drawn from Greg Aarons' EPIS framework and conveys the iterative, multiphase nature of implementation science. The box also distinguishes implementation practice from implementation research. Implementation practice focuses much more on local questions: can we make this intervention work in this specific setting? Implementation science, or implementation research, focuses heavily on developing generalizable knowledge: can we generate methods and procedures that will work across settings and disease conditions? Another way of thinking about this effectiveness-versus-implementation question is that in much of our effectiveness research, we're focused on evaluating health outcomes and on the intervention we're trying to implement, whereas in implementation science, we're focused more on the system needed to support adoption and delivery with fidelity, so we're evaluating outcomes related to the quality, quantity, and speed of delivery. This is yet another model, drawn from Enola Proctor and colleagues, that contrasts with our usual way of thinking about clinical and effectiveness research. In the usual way, we focus on the what, the specific interventions, innovations, or evidence-based practices, and on health outcomes: satisfaction, functioning, health status, and symptoms. Implementation focuses on a different set of outcomes and also a different set of interventions. We'll talk today about implementation strategies, which are really the how of implementation, and we'll also talk in a little more detail about implementation outcomes. These are the intermediate indicators that implementation is successful, and they help us differentiate between intervention failure and implementation failure: are interventions acceptable, appropriate, and feasible? Can we get organizations and systems to adopt these interventions with fidelity? Can we look at penetration, or reach, over time, meaning the proportion of clinics or clinicians implementing a given intervention? And ultimately, can we sustain and scale up these interventions? That's really the core of implementation research and much of what we'll focus on today.
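To make those outcomes concrete, here is a minimal sketch, using entirely hypothetical data and names, of how two of them, adoption and penetration, might be computed across a set of clinics:

```python
# Minimal sketch (hypothetical data): computing two implementation
# outcomes, adoption and penetration, for a set of clinics.

# For each clinic: total clinicians, and clinicians who delivered the
# evidence-based intervention at least once in the period of interest.
clinics = {
    "clinic_a": {"clinicians": 12, "delivering": 9},
    "clinic_b": {"clinicians": 8,  "delivering": 0},
    "clinic_c": {"clinicians": 15, "delivering": 6},
}

# Adoption: the proportion of clinics with any delivery at all.
adopting = [c for c in clinics.values() if c["delivering"] > 0]
adoption = len(adopting) / len(clinics)

# Penetration (reach): within each clinic, the proportion of clinicians
# actually delivering the intervention.
penetration = {name: c["delivering"] / c["clinicians"]
               for name, c in clinics.items()}

print(f"Adoption: {adoption:.0%}")
for name, share in penetration.items():
    print(f"Penetration at {name}: {share:.0%}")
```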
In much more plain language, drawing from our colleague Geoff Curran, we can think about the intervention, practice, or innovation as the thing. We can think about implementation strategies as the stuff that we do to try to help people and places do the thing. And we can think about implementation outcomes as really helping us understand how well individuals, organizations, and systems do the thing. Let's move on to questions for this section.
>> Thanks, Byron, for getting us kicked off. A lot of people come up to us and say, you know, I'm still struggling to see the difference between an implementation study and an effectiveness research study. When you talk about the thing, they may look at the outcomes of their effectiveness study and say, but I'm also really focusing on how well we do that thing. I wonder if each of you might give us a little more guidance: how do we make that distinction between our effectiveness research and our implementation studies when we look at these two kinds of studies side by side? Are there things that help you put a study squarely in the implementation bucket or squarely in the effectiveness bucket?
>> Yeah, so one of the first things I'll say is that oftentimes, with implementation studies, there's much more focus, again, on the implementation strategy, so evaluating the interventions that we're actually using to get the clinical innovation, or intervention, or the thing into practice. Oftentimes, we're really focusing on whether those strategies are acceptable, feasible, and appropriate, and whether they actually lead to improved fidelity to the clinical intervention. The other thing I can point to, which kicks the can down the road a bit, is that we're increasingly seeing a blurring between effectiveness studies and implementation studies, and so we'll talk later about hybrid effectiveness-implementation trials and the different ways of conceptualizing that. But maybe Cara and Rinad have additional thoughts.
>> I'd be happy to chime in to offer something that might not provide additional clarification but just highlight the muddiness. Especially if you're working on a multilevel systems intervention, those distinctions become less and less clear, I think. I often find myself having conversations with colleagues who are, for instance, trying to implement a guideline, and they're talking about an intervention that includes things like audit and feedback, which I might think of as an implementation strategy but which they're thinking of as core to their intervention to get the guideline going. So, I don't think the distinction is always clear, and it might not need to be made more clear, but beginning to ask these questions, what is the set of interventions, or the components of an intervention, intended to deliver the immediate clinical or patient-level outcome, and then, stepping back a level, what are the things I'm trying to do to help get that to happen, can help us take the implementation science lens. Rinad, would you add?
>> Yeah, I think the answers you both gave were great. If I were to try to give a black and white answer, which it's not, though my students sometimes ask me to; they get frustrated with me and say, just tell us, how do we know what kind of trial it is; I'd say it really starts with looking at the outcome. Now, of course, the points that Dr. Lewis just raised are very accurate once you start doing multilevel types of interventions. However, I do find that often, folks come to me who are just interested in patient outcomes, and in that case, without an added emphasis on the implementation process, whether that be the barriers or facilitators or the strategies being used, it's easier for me to say what's not implementation science than sometimes what is implementation science.
>> Cool. Thanks.
>> I think you're up, Rinad, is that --
>> Yeah, fantastic. So, I'm thrilled to have the opportunity to share with you all a project that I did over five years, which represents a naturalistic observational study. I have the good fortune of being part of a public mental health system that was, and continues to be, very invested in the use of evidence-based practice, particularly cognitive behavioral therapies, which are psychosocial in nature. This is a picture of the beautiful city of Philadelphia, where I'm lucky to live. Over the course of about a decade, our previous commissioner of mental health, Dr. Arthur Evans, who sat at the helm of the Department of Behavioral Health and Intellectual Disability Services, was very interested in the question of how to get folks served by the public mental health system access to evidence-based practices. This was especially salient to him given that many of the folks who developed the leading evidence-based practices, such as Dr. Aaron Beck with cognitive therapy, live here in Philadelphia and did that work here in Philadelphia. You don't need to worry about the details of each of the initiatives, but I share this so you can see that over the course of about a decade, a number of evidence-based practices were rolled out, starting with cognitive therapy, followed by prolonged exposure, dialectical behavior therapy, and then parent-child interaction therapy; again, these are all variants of cognitive behavioral therapy. At some point in the process of rolling out these various evidence-based practices, Dr. Evans began to realize that the initiatives I just briefly described were somewhat siloed from one another, were not learning from one another, and were facing similar sticking points in the implementation process. And so, he developed a center representing a public-academic collaboration called EPIC, the Evidence-based Practice and Innovation Center, which was created to bring together all of these various initiatives in which evidence-based practices were being implemented across a large city serving over 100,000 consumers annually through the public mental health system.
What you can see here is a map of Philadelphia County, and each of these yellow stars represents an agency in the network that is implementing at least one of those evidence-based practices through the various initiatives I described. I want to be really clear: the study I'm describing was very much observational, so I personally was not manipulating any implementation strategies. I think that taking advantage of these natural experiments, of systems implementing evidence-based practices, can be very, very valuable, and I feel very lucky that I had the opportunity to do an observational study to understand how the implementation strategies undertaken by a system of care resulted in use of evidence-based practice. This paper, led by Byron, describes the various ways the City of Philadelphia supported evidence-based practices, which included a range of training and consultation, contracting for evidence-based practice delivery, hosting events that highlight evidence-based practice champions, a newer designation process whereby organizations are designated as evidence-based practice agencies, and enhanced rates. This represents the bundle of implementation strategies implemented by the system as part of EPIC, that larger, centralized infrastructure intended to support evidence-based practices in our city.
Again, I'm not going to get into all of the details, but I want to give you a brief sense of the data collection in this project, which, as I mentioned, spanned five years. The first wave in 2013 preceded the creation of EPIC, and we used purposive sampling to reach out to about 30 agencies that provided the bulk of services to children in the City of Philadelphia. We did continued sampling in 2015 and 2017, and over the course of the study, we were able to enroll about 500 therapists and 100 administrators to really get a sense of what happens to use of evidence-based practice across the creation of the centralized infrastructure and the rollout of all of these implementation strategies. Because it's very important to use an implementation science framework, or some kind of framework, in designing and carrying out your study, we endeavored to do the same, and we used the EPIS framework, developed by Dr. Greg Aarons and colleagues, which stands for exploration, preparation, implementation, and sustainment. This is a bit of pre-staging, because we'll talk about this more in the next section, but briefly, EPIS is a determinant framework, which means that it lists a number of contextual variables that may be of interest in the implementation process. Our primary research question was whether a system-level policy, the creation of the centralized infrastructure, would result in increased use of evidence-based practices, but we were also interested in the effect of contextual factors, such as organizational culture and climate, implementation climate, and leadership, on that process. So, the main questions we were interested in in this case study, which I'll try to pepper into the remaining sections when relevant, are these. Does the use of cognitive behavioral therapy, the evidence-based practice we were interested in, increase over the five-year period as a result of the creation of the centralized infrastructure? What types of contextual factors drive clinician behavior in a large system implementing evidence-based practice (so again, our outcome was clinician behavior)? What are stakeholder perspectives on barriers and facilitators to implementation of evidence-based practice (here we got to use mixed methods to try to unpack that)? And then, toward the tail end of this work, we started getting very interested in the temporal relationships between the various constructs, the contextual determinants, to try to understand mechanisms of the strategies that were used.
>> For the second study, I'll just offer a quick overview of a randomized trial in which we compared two approaches to implementing measurement-based care in community mental health settings; if you're interested, the full protocol has been published in Implementation Science. Measurement-based care is an evidence-based structure, or framework, for informing treatment; it's used for a variety of conditions across settings, sectors, and providers. Measurement-based care can be defined as the systematic evaluation of patient symptoms prior to or during a clinical encounter to inform treatment. Our team conceptualizes measurement-based care as having at least three core components needed to demonstrate fidelity, each of which adds incremental impact for the patient. The first is administration of a measure, in most cases a patient-reported outcome, though it could also be an objective measure. Following that, there is review of the data by the provider or care team and the patient. And finally, there is discussion of the data to inform care in that particular clinical encounter and then in an ongoing fashion to inform treatment.
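To make those three components concrete, here is a minimal sketch, with hypothetical encounter records, of how fidelity to measurement-based care might be scored as the proportion of encounters in which all three components occurred:

```python
# Minimal sketch (hypothetical encounter records): scoring fidelity to
# the three core measurement-based care components described above.
from dataclasses import dataclass

@dataclass
class Encounter:
    administered: bool  # a symptom measure (e.g., a PRO) was given
    reviewed: bool      # provider/care team and patient reviewed the data
    discussed: bool     # the data were discussed to inform treatment

def mbc_fidelity(encounters: list) -> float:
    """Proportion of encounters in which all three components occurred."""
    if not encounters:
        return 0.0
    complete = sum(e.administered and e.reviewed and e.discussed
                   for e in encounters)
    return complete / len(encounters)

visits = [
    Encounter(True, True, True),
    Encounter(True, True, False),   # measure given and reviewed, not discussed
    Encounter(True, False, False),  # measure given but never used
]
print(f"MBC fidelity: {mbc_fidelity(visits):.0%}")  # 33%
```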
Our primary aim was to address the question of whether standardized or tailored measurement-based care implementation would impact clinician-level outcomes and client- or patient-level outcomes. We were interested both in the impact of tailored versus standardized approaches on fidelity to measurement-based care, and also on patient-level depression symptom outcomes, because in our study the measurement-based care intervention could be tailored to the clinic, and we were thinking that might change the impact measurement-based care is otherwise thought to have based on previous research. This gives you a high-level overview of our study design: it was a dynamic cluster randomized trial. Clinics were first matched based on size and on rural versus urban status, then randomized to cohort, and then to condition. We used a mixed methods data collection approach at various time points; these vertical lines depict assessment time points, bookending the active implementation period and then a follow-up sustainment period.
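As a rough illustration of that allocation logic, here is a minimal sketch with hypothetical clinics; it pairs clinics on size within rural/urban status and randomizes within pairs to condition, leaving aside the cohort step for simplicity:

```python
# Minimal sketch (hypothetical clinics) of the allocation logic: match
# clinics on size within rural/urban status, then randomize within each
# matched pair to the standardized or tailored condition. The cohort
# step from the actual trial is omitted for simplicity.
import random

clinics = [
    {"name": "clinic_1", "size": 40, "rural": True},
    {"name": "clinic_2", "size": 35, "rural": True},
    {"name": "clinic_3", "size": 90, "rural": False},
    {"name": "clinic_4", "size": 85, "rural": False},
]

random.seed(42)  # fixed seed so the illustration is reproducible

# Sort by size within each stratum so adjacent clinics form matched pairs.
pairs = []
for rural in (True, False):
    stratum = sorted((c for c in clinics if c["rural"] == rural),
                     key=lambda c: c["size"])
    pairs += [stratum[i:i + 2] for i in range(0, len(stratum), 2)]

# Within each pair, one clinic gets each condition at random.
for pair in pairs:
    random.shuffle(pair)
    pair[0]["condition"] = "standardized"
    pair[1]["condition"] = "tailored"

for c in clinics:
    print(c["name"], c["condition"])
```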
To appreciate the implementation intervention, and this is quite a busy slide, what I want to draw your attention to is that the first column indicates a set of contextual factors informed by a process model that I'll show you in a moment. They gave us a sense of how we should design a blended implementation protocol of strategies to address each of these contextual factors, or known determinants. In the standardized implementation approach, we pulled together what we thought of as the best practices to address each of those determinants; in the tailored condition, the content was contextualized and the process was highly collaborative. Those were the main differences between conditions. We'll come back to each of these two studies throughout the talk, as Rinad indicated at the beginning, but I just wanted to see if there are any immediate questions about either of those case studies.
>> Sure. So, actually, given that we likely have a group that's heterogeneous in its interests, and you've mentioned some wonderful examples that I think underscore the leadership of you and other mental health investigators in advancing implementation science, I wonder if you might have other examples, outside of behavioral health, that would show the breadth of the kind of work that you've participated in or know of in the field.
>> Sure. We just got funding from PCORI to explore the impact of the EPIS model and the dynamic adaptation process on implementing shared decision-making for bariatric surgery in primary care settings, so that's an example case. Byron or Rinad, would you like to add others?
>> Yeah, sure. One of my favorite parts of being an implementation scientist is that I get to learn about areas I never imagined I would learn about. I've had the fortunate opportunity of working with some folks doing cancer care work here at the University of Pennsylvania. This work has been led by Dr. [inaudible] and Dr. Frank Leone, who have been interested in trying to increase referrals to evidence-based tobacco cessation treatment for patients with cancer. As part of this NCI-funded work, they created an automatic referral in the electronic health record for patients who endorsed smoking as part of their cancer care, to see if that would result in increased referral to evidence-based tobacco cessation treatments. That's been a really exciting project that I've been fortunate to be involved in.
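As a rough sketch of that kind of EHR-triggered logic, with hypothetical field names and no claim about the actual system's design:

```python
# Minimal sketch (hypothetical field names and workflow) of the
# automatic-referral logic described above: when a patient endorses
# current smoking during cancer care, generate a referral to
# evidence-based tobacco cessation treatment without waiting for a
# clinician to place the order manually.
from typing import Optional

def auto_referral(patient: dict) -> Optional[dict]:
    """Return a tobacco cessation referral if the patient smokes."""
    if patient.get("smoking_status") == "current":
        return {
            "patient_id": patient["id"],
            "order": "tobacco_cessation_referral",
            "trigger": "smoking endorsed during cancer care intake",
        }
    return None

print(auto_referral({"id": 101, "smoking_status": "current"}))
print(auto_referral({"id": 102, "smoking_status": "never"}))  # -> None
```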
>> And I'll just add a bit. I've been really thrilled to have the opportunity to work with Rachel Gold, who has funding from NHLBI to work with federally qualified health centers and to think about how we can better implement social needs screening in that setting. She's really thinking through both the type of technical assistance that FQHCs need to implement social needs screening and how that support needs to be tailored. It's a particularly interesting area because, and this is not my area of expertise, the social needs screening processes themselves are not necessarily well-defined, and different FQHCs may be implementing quite different interventions, or things, if you will, depending on their specific needs, interests, and available resources. So, it's been fascinating to learn from Rachel and her team in that area.
>> Cool. It just sounds like, from each of your experiences and others, that what we can learn about implementation across different contexts and different disease areas will hopefully advance our understanding overall, so thanks for that.
>> Great.
>> Okay. The next section is our entree into implementation frameworks, which has probably been the area of greatest proliferation and development in implementation science. As of 2012, there were over 60 implementation models, frameworks, and theories identified in a systematic review led by Rachel Tabak and colleagues. There's a favorite quote, for which Byron gracefully identified the originating source, that frameworks are like toothbrushes: everyone has one and no one wants to use anyone else's. Given that this is the case, the proliferation actually creates a lot of confusion for folks new to the field, so it can be helpful to draw out some commonalities. For instance, implementation does seem to be a multiphased process. Rinad already mentioned the EPIS model that Greg Aarons and colleagues developed, which suggests that there are at least four stages to implementation. Exploration is typically followed by an adoption decision, and I will say I feel that particular phase is understudied; people often seem to come to us, as implementation scientists, having already decided on an evidence-based practice, which I think makes it an interesting and understudied phase. The next phase is preparation, after which you see the active implementation work, such as training and coaching, beginning and leading through the implementation phase, after which you really hope to see that the evidence-based practice is delivered with fidelity, with the sustainment phase following. Just because these are ordered linearly doesn't mean you wait until the fourth phase to think about sustainment; importantly, sustainment should be considered even before you've decided which evidence-based practice you'd like to implement.
The other commonality we wish to draw out is that implementation is inherently multilevel. Across models and frameworks, we see that the individual receiving the evidence-based practice is impacted, but so are the team, the clinic, the organization, the system, and so on, and different authors depict these levels differently. Oftentimes, knowing the target level can help you choose implementation strategies, which we'll talk about in a little bit. I'm really grateful for this paper by Per Nilsen, which offers a typology of theories, frameworks, and models, and helps us understand the differences between them and when each might be most useful. We won't take a magnifying glass to each of these types, but we will drill down into several of them today and talk about why you would use one versus another and how that shakes out.
The first set is implementation process models. These are quite common, especially among implementation practitioners, the people doing the hard work of implementation, and maybe less common in empirical trials that are trying to assess determinants and strategies; you'll see why as we unpack each of these types. Just to give you a nice example of one process model: coming out of Canada, where they use the language of knowledge translation as an umbrella term under which dissemination and implementation fall, this knowledge-to-action (KTA) cycle by Graham and colleagues, published in 2006, is widely used. And then there's one that I've used, which is kind of a hybrid process/determinant/evaluation model, but because of its limited [inaudible] specificity, I think of it more as a process model; we used it in the randomized trial I presented on moments ago. Over here in the top left, you see the context of diffusion: six buckets of contextual factors that are typically influential in an implementation process, that should be considered, and that will impact these three stages of diffusion, or phases of implementation. Throughout that work, you'll be paying attention to patient care and health outcomes and to organization and system outcomes. The process you might undertake to successfully move through these stages or phases begins with a capacity or needs assessment, followed by process evaluations and formative feedback to inform future stages, and then a summative feedback process for the outcome or impact evaluation. I think at this point, we have another break for questions, if you have any, David.
>> Oh, thanks for asking, I do have a question. A lot of people come up to us and ask: we know that it's important to have a model, but we don't really have a great sense of how to go about choosing one. Any guidance that any of you have for getting from those 61 or more models to the one you would specifically choose for your study?
>> That's a great question. The first thing that comes to mind is the work of Russ Glasgow, [inaudible], and colleagues in Colorado, who have a website that actually lays out the 60-plus frameworks and organizes them for interested users. You can go to this website and, knowing the parameters of your study or project, begin to specify the things you're interested in, and it can help you choose your model. The other thing I would say is that as we go through the next types of models, the answer will become more clear, but before we move on, Rinad, do you want to add, and then, Byron?
>> So, for me, it's very much driven by what my research question is, and I've used different models across projects based on what fits the research question most appropriately. I can talk about that a little more once we have all the frameworks laid out, but I think that seminal 2015 paper by Per Nilsen that Cara described really did the field a favor by grouping the different types of frameworks into those three categories, because each of them serves a particular function. Often, I'll find myself using both a determinant framework and an evaluation framework together, and so I think being really clear about what you need your framework for and how it fits into your research question really helps guide selection.
>> Yeah, just two additional thoughts. First, I would echo that; I really find that Nilsen paper very helpful, and I'm increasingly using multiple frameworks, which kind of complicates things for writeups, both grant proposals and manuscripts, but being very clear, as Rinad suggested, is really important. Second, I would point to some work that we've done: Sarah Birken has led some work to develop a theory and framework selection tool called T-CaST. We laid out some different criteria that you might want to consider in four broad categories: usability, testability, applicability, and acceptability. There's an open access paper in Implementation Science, and if that's not in our resource guide, we can certainly add it.
>> Great, thanks.
>> So, now we'll spend a little bit of time talking about determinants and identifying and prioritizing barriers and facilitators. You've probably already heard it in the language we've been using: there are lots of words for determinants. Broadly, these are factors that might prevent or enable improvements in practice, and you can see on the slide all of the different words that people use. Cara, when she was speaking, referred to contextual factors, and from my perspective, this all comes down to context. To take a step back for a moment, picture yourself as the principal investigator of a multisite randomized controlled trial, an efficacy trial. Let's say you start to identify significant site differences in the efficacy of your intervention. That's something that's going to keep you up at night, and you're going to try to minimize all of the noise that comes from context. In implementation science, instead of seeing context as noise, we're really interested in understanding and characterizing it; there's a growing sense that context is critical, and that's really what we spend a lot of time focusing on in implementation science. So, there has been a lot of work to date to try to understand the determinants, or the barriers and facilitators, of the implementation process, and there are many ways you could assess for barriers and facilitators, some of which are listed here. You could go to the literature and see what's been done already; there's no sense in reinventing the wheel if there's already been a pretty in-depth contextual inquiry of the particular setting and intervention you're working with. You can use informal consultation, surveys, interviews, and ethnographic methods. And increasingly, I think the gold standard is to use mixed methods approaches in trying to characterize context and understand barriers and facilitators.
I would say that one of the most robust areas within the field is these determinant frameworks, or contextual frameworks, which basically provide a list of the various contextual factors you may want to consider during an implementation process. One crowd favorite is the Consolidated Framework for Implementation Research (CFIR), which was developed by Laura Damschroder and colleagues. One of the really nice things about the CFIR is that it consolidates across a number of theories and frameworks from multiple disciplines and lists 39 constructs within five domains, broad buckets that include the implementation process; the individuals involved in the implementation process; the characteristics of the intervention; the inner setting, which refers to the organization or setting in which you're trying to implement; and the outer setting, which refers to the broader sociopolitical context. CFIR has a really wonderful website that's constantly evolving, cfirguide.org, where you can see the latest and greatest thinking on the Consolidated Framework for Implementation Research. It's been a great addition to the field, and Laura continues to push the envelope in thinking about how to use the CFIR to guide implementation strategy selection. One thing that can be a little overwhelming for folks newer to implementation science is knowing which of these constructs to prioritize, because there are 39 of them, and many more than that across frameworks. The other challenge with the growing list of determinant frameworks is that no specific causal relationships are specified between the various contextual factors, which can be really hard when you're trying to move toward a mechanistic understanding of how implementation works.
Another determinant framework is the theoretical domains framework (TDF), which was developed by Susan Michie and colleagues; a second iteration has just been published, I believe in Implementation Science as well. The TDF is more focused on the individuals doing the implementation, whereas the CFIR places more emphasis on some of the organizational and outer-setting factors involved in the implementation process. You can see here a number of domains that the TDF suggests you consider about the individuals involved in implementation, such as their knowledge, skills, intentions, and goals, which might impact their ability to carry out implementation. Yet a third determinant framework, EPIS, which we've described before, is the one I selected for the project and case study I described earlier in this presentation. As Cara alluded to and mentioned, the really wonderful thing about EPIS is that it acknowledges the multilevel nature of the implementation process and that different contextual factors may be important at each stage. I personally selected this framework to guide the observational work I presented earlier because it was initially developed for public service sectors, so it has a careful eye toward the contextual factors that might be important in those sectors, and the work I was describing was in a public mental health system, so it was particularly appropriate. I'll close this section by noting that I think there's been a bit of an evolution in our thinking in implementation science. Earlier on, there was a real focus on trying to capture context, with a lot of work identifying the barriers and facilitators of implementation, both qualitatively and, increasingly, quantitatively. Now there's increasing interest in understanding the relationships among the contextual factors listed in the determinant frameworks.
We shared this paper as an example of work that sought to understand the relationship between molar organizational climate, a general organizational construct referring to shared perceptions of how the work environment impacts the psychological wellbeing of employees, and strategic implementation climate, which is much more proximal to implementation and refers to the general sense of whether an evidence-based practice is expected, supported, and rewarded in an environment. Longitudinally, in the case study we've been talking about, we were able to look at the interaction between that general organizational construct and the more proximal, implementation-specific construct, and we were able to see that both were very important. In organizations with high molar climate at baseline and high implementation climate, we saw increased use of evidence-based practice over time, whereas in organizations with lower general molar organizational climate, we saw no such relationship. So, it's a really exciting time to be thinking about a mechanistic agenda, and Cara Lewis is definitely leading the way in helping us think through how to start elucidating those mechanisms.
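As one hedged illustration of how such an interaction might be tested, here is a minimal sketch using statsmodels; the data file and column names are entirely hypothetical, and the actual analysis in the paper may have differed:

```python
# Minimal sketch (hypothetical file and column names) of testing the
# moderation pattern described above: does baseline molar organizational
# climate interact with implementation climate in predicting clinicians'
# EBP use over time? Clinicians are nested within organizations, so a
# mixed model with a random intercept per organization is one option.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("clinician_waves.csv")  # hypothetical long-format data

model = smf.mixedlm(
    "ebp_use ~ time * molar_climate * implementation_climate",
    data=df,
    groups=df["organization_id"],  # random intercept per organization
)
result = model.fit()
print(result.summary())  # inspect the interaction terms
```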
So, I'm going to pause here and ask David
if he has any questions.
>> Sure, I actually do. As you look across the different models, we often see so many different barriers or facilitators listed, and it strikes us that it can be hard to identify which of those many barriers and facilitators are important within the context of any particular study. How do you prioritize among what could be a pretty long list of barriers and facilitators?
>> It's such a great question, and I can't tell you that I have the answer, because I think as a field, we're still trying to figure that out. In some of my work, I've been applying a method called intervention mapping, which involves review of the literature, stakeholder input, and expert input in trying to prioritize which determinants to target with our implementation strategies, and I've found it to be a useful approach. But it can be difficult to know which determinants are most important. Sometimes we hear from our stakeholders, and we haven't really talked about this yet, but we will in subsequent sections, about the importance of the stakeholder voice in the implementation process. I think, as a science, we're still figuring out whether, if stakeholders feel a particular determinant is very important, that will bear out empirically. So, I'm still struggling myself with figuring out what to prioritize and how to prioritize, but intervention mapping has been a helpful way for me to start thinking about that.
>> I think Rinad touched on some things that are also interesting with respect to the timing of when we assess barriers, and I think that comes out in two ways. One is in terms of stakeholder perceptions: sometimes we assess barriers before an intervention is introduced, and what we end up with are perceived barriers, not necessarily real or experienced determinants of implementation. The other thing about timing that's interesting, and coming up increasingly as we think about tailored implementation, is whether, rather than assessing barriers at the beginning and the end of a project, we can develop ways to iteratively assess barriers and adaptively address them over time, knowing that different determinants might be more or less relevant across different phases of implementation. In fact, even addressing certain barriers may reveal a whole new set of problems that we hadn't anticipated. We've been trying to think in that area more, and I'm working with someone [inaudible] at the University of Toronto to think about how we can more systematically assess barriers. The question of how and when we assess barriers is not passé; it's not figured out yet by the field, so hopefully we can push the envelope in that space a little bit.
>> I would just add a couple of things. One, it seems like the first bolus of work that came out of this generation of implementation science asked, what are the barriers and facilitators? I mention that because I think we're sitting on top of a pretty robust literature across a variety of evidence-based practices, settings, and sectors from which we should draw. Rinad mentioned in her response that intervention mapping encourages looking to the literature, and I just want to underscore that: start there. The other thing, and I think this also came out in what you said, Rinad, but to underscore it again: the intervention itself will point toward probable barriers and also toward strategies that should be brought to bear. And then I think the place where really important methodological work is needed is the prioritization phase, once you've done your systematic assessment of barriers. Rinad noted that stakeholders might have opinions about this, but I have concerns: if we only prioritize barriers that are perceived to be important and feasible by our partners, we might miss those that are critical to implementation success based on, for example, theory. So, bringing to bear our best knowledge, theoretically and from our partners, and figuring out a systematic way to prioritize the outputs of the barriers assessment is important work for the field to tackle.
>> Great. Thanks. So, moving on to implementation strategies, or, as we already mentioned, the how of implementation. We've defined implementation strategies as methods or techniques used to enhance the adoption, implementation, sustainment, and scale-up of a program or practice. In our work, we've differentiated between different types of implementation strategies. We often refer to discrete implementation strategies, which we define as single actions or processes used to enhance implementation; these are things like reminders, audit and feedback, clinical supervision, a training workshop, and the like. Often, given the myriad barriers we face in implementation, many of which Rinad just highlighted, we need multifaceted implementation strategies, which combine multiple discrete strategies, and sometimes even multilevel implementation strategies, to address them. Also note that some of these strategies have been protocolized, branded, and tested in rigorous trials. A couple of examples I frequently give: Charles Glisson's availability, responsiveness, and continuity (ARC) intervention, which is designed to improve implementation and clinical outcomes in children's mental health services by improving organizational culture and climate; and, in relation to EPIS, Greg Aarons has developed a leadership and organizational change intervention that targets specific leadership behaviors we think are related to improving implementation and clinical outcomes. So, some of these more complex implementation strategies are actually combinations of discrete strategies that are protocolized and tested as a package.
A number of years ago now, we published a compilation of discrete strategies from a structured review of the literature. We originally turned to the literature to ask which strategies are effective for implementing evidence-based practices, and maybe more specifically, what the evidence-based strategies are. When we first went about that task, one of the things we found was that the literature itself is a bit of a mess: people use the same term to describe very different strategies, or different terms to describe basically the same strategy, and usage was very inconsistent across published articles. So, we sought to consolidate that literature a bit and at least provide terms and definitions for these implementation strategies that we could hopefully have some level of agreement on as a field, and that has been an ongoing process. In that paper, we categorized strategies into six broad categories: some focused on planning, some on education, some on financing, some on restructuring, whether of the physical environment or of clinical teams and the like, some on quality management or quality improvement, drawn from that literature, and a few on attending to the policy context. The idea was to give people a compendium of strategies that they could use in practice as they were designing improvement and implementation efforts, and to give individuals and groups designing implementation trials a way to clearly define the component implementation strategies they were using.
Based on the results of that systematic review, we were approached by colleagues from the VA, JoAnn Kirchner and Tom Waltz, who led the Expert Recommendations for Implementing Change (ERIC) study. The first step of that study was to have the terms and definitions from the original paper vetted by a group of implementation practice and research experts. We engaged first in a three-round Delphi process and later a concept mapping study, which allowed us to vet these strategies and come out with a refined compilation, and we then categorized the 73 implementation strategies into distinct categories. Both of those are open access publications, and I'll just note that the most complete version of the strategies and definitions is actually hidden in the additional file of the 2015 publication, if you want to track it down.
So, that work doesn't explicitly reference the evidence for implementation strategies, but we want to note that there's a growing body of evidence for specific strategies. The Cochrane Effective Practice and Organisation of Care (EPOC) group has really been a leader in this regard. I won't go over these strategies one by one, but this gives you a sense that for some of them, we have a great deal of evidence, or at least a great number of trials in which they've been tested. Audit and feedback, which Cara mentioned earlier (actually auditing or measuring performance and feeding it back, often in comparison to peers), has 140 randomized trials, and there are still international efforts ongoing to optimize audit and feedback. So, despite this growing evidence, we still have a lot to learn about when, where, how, and why these strategies are actually effective.
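The core computation behind audit and feedback is simple; here is a minimal sketch with hypothetical performance data, comparing each clinician to the peer median:

```python
# Minimal sketch (hypothetical data) of the core audit and feedback
# computation: measure each clinician's performance and feed it back
# in comparison to their peers.
from statistics import median

# Hypothetical audited performance: proportion of encounters in which
# the clinician delivered the evidence-based practice.
performance = {"clin_1": 0.82, "clin_2": 0.55, "clin_3": 0.70}

peer_median = median(performance.values())

for clinician, score in performance.items():
    gap = score - peer_median
    print(f"{clinician}: {score:.0%} "
          f"({'above' if gap >= 0 else 'below'} peer median of {peer_median:.0%})")
```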
So, given that the evidence base is imperfect and we have this compendium of implementation strategies, how do we actually design and tailor implementation strategies so that they're effective? This is perhaps an overly simplistic way of thinking about it, but the hope is that, as we do our context assessments and identify potential determinants, or barriers and facilitators, our implementation strategies will be well matched to address them. For instance, if we identify lack of knowledge as a problem, it may be appropriate to offer some sort of interactive education session. If there's a perception-reality mismatch, say clinicians report that they're already doing evidence-based practices, we may want to assess whether that's the case by administering a fidelity measure and feeding the results back to them in comparison to their peers. If there's a lack of motivation, we may need to develop some sort of incentive structure, and so on and so forth. This slide demonstrates how we might think of a single determinant being addressed by a single strategy.
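A minimal sketch of that matching logic, using only the hypothetical barrier and strategy names from the examples above, might look like this:

```python
# Minimal sketch: pairing identified determinants with candidate
# implementation strategies, using only the hypothetical examples above.
barrier_to_strategies = {
    "lack of knowledge": ["interactive education session"],
    "perception-reality mismatch": ["fidelity assessment",
                                    "audit and feedback vs. peers"],
    "lack of motivation": ["incentive structure"],
}

# Determinants surfaced by a (hypothetical) context assessment at one site.
identified = ["lack of knowledge", "lack of motivation"]

# Assemble a possibly multifaceted strategy bundle matched to this site.
bundle = [s for barrier in identified
          for s in barrier_to_strategies.get(barrier, [])]
print(bundle)  # ['interactive education session', 'incentive structure']
```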
Oftentimes, we know that this is more complex: we might have multiple strategies that address a single determinant, and we might have one strategy that addresses multiple determinants. This is another, perhaps simple, example demonstrating how we might construct multilevel strategies, drawn from a great 2012 paper by Brian Weiner called In Search of Synergy. He points here to multiple types of interventions that together are meant to address physician motivation and women's knowledge and to improve provider-patient interactions. In this case, there is a healthcare collaborative, which we can think of as an organizational-level intervention; a strategy to address provider communication at the interpersonal level; and education and counseling for women, which we can think of as an individual-level intervention; all of which, hopefully, improve physicians' motivation and women's knowledge, improve the provider-patient interaction, and ultimately enhance cervical cancer screening rates. So, typically, we're putting together discrete strategies into multilevel or multifaceted strategies, and hopefully we have some model of how and why they're supposed to work, which we'll talk about in a few slides.
So, this slide, I guess, is fair warning; it might seem like we're beating up on the field too much, but I think it reflects our growth as a field and also the uncertainty of the evidence base and of some of the theories, models, and frameworks guiding this work. Unfortunately, rather than systematically developing and applying implementation strategies, we often engage in other ways of working. One is the train and pray, or train and hope, approach, in which we send clinicians and other stakeholders to training and hope that their behavior changes. We know from a robust body of literature that training is probably necessary but not sufficient to change implementation and clinical outcomes. Another, again perhaps too pejorative, characterization is that we often rely on the kitchen sink approach: we throw multiple implementation strategies at a problem, thinking that more intense is better, or more is better. Actually, there are some interesting findings in the literature suggesting that this is not the case, that multifaceted strategies are not necessarily inherently better than single-component strategies; I'm referencing Janet Squires' work here from 2014. There are a couple of plausible explanations for this. One is that we might not always have a great a priori rationale for why we're using multiple strategies, and they may not be well matched to the determinants in a given setting. Another plausible explanation is that multifaceted strategies may be multifaceted by definition but actually focus primarily on the same determinant or target. For instance, we might have a multifaceted strategy that really just focuses on improving provider knowledge; it might not address determinants at other levels, such as the organizational context or the outer-setting financing and political factors that may heavily influence implementation. One might also argue that we use the one-size-fits-all approach too often: we think about implementing a specific intervention and develop implementation strategies that might be common across different settings and systems, when we know these are actually highly heterogeneous and may need more tailored approaches. Admittedly, this is an area of ongoing work and debate in the field: how much do we actually need to tailor our implementation strategies to specific contexts?
And then finally, my personal favorite, from Martin Eccles, who was one of the original editors of Implementation Science: we are often driven by the ISLAGIATT principle, "it seemed like a good idea at the time." I think this often looks like inertia. Sometimes organizations and systems might say, we've always done it this way, and so they continue to use the same implementation strategies. Other times, we're earnestly trying our best but just don't have the evidence, theory, or systematic processes needed to implement well.
I and others have recently published this
paper in Frontiers in Public Health where
we talk about a number of ways in which we
think that we need to enhance the impact of
implementation strategies and I'm just going
to hit on each of these quickly. The first
is sort of building off that -- that previous
slide of the sort of maybe unsystematic ways
in which we've thought about applying implementation
strategies that we really need more work to
enhance methods for designing and tailoring
implementation strategies, and that starts
with what Rinad touched on and which -- what
we talked about during that Q and A session
is that we actually need better methods for
identifying and prioritizing barriers, and
that's I think a really important area we've
already highlighted. We need more adaptive
approaches, again, barriers changing over
time, across the course of implementation
efforts. So, how can we assess those in an
ongoing way and then adapt or tailor our implementation
strategies accordingly? And then finally,
once we've sort of identified the determinants
that we think are key, what are -- what are
some systematic and rigorous methods that
we can use to enhance the link between the
barriers and strategies because there's actually
some literature to suggest that even when
we try to prospectively tailor implementation
strategies, we often end up with a mismatch
between the types of determinants that we're
identifying and the types of strategies that
we're using. So, for instance, Bosch et al
found that -- Bosch et al found that we often
identify organizational level factors that
may be influencing implementation, but then
we respond by providing sort of more provider
training or individual level interventions
that don't address those structural issues
very well. So, as we think about enhancing
methods for designing and tailoring, we certainly
think that this should involve participatory
processes, as Rinad noted for the centrality
of stakeholder engagement in implementation
science, hopefully, we're drawing upon the
best of our theory to the extent that that's
helpful and hopefully, we're not only drawing
upon the best of our theory, but -- but doing
work that would build theory over time to
the extent possible drawing upon available
evidence. So, again, rely -- relying upon
the Cochrane group and others who have provided
excellent systematic reviews and meta analyses
of a number of implementation strategies and
building on that work and hopefully optimizing
strategies over time, and really developing
a thorough understanding of context before
we go about implementing or selecting imitation
strategies. So, Heather Cahoon at the University
of Toronto has a really nice systematic review
focusing on different methods for designing
interventions specifically related -- related
to changing healthcare professional behavior.
We've also suggested several methods that
could be used more systematically, design
and tailor implementation strategies. Rinad
has already mentioned the intervention mapping
as a promising candidate strategy there that's
getting increasing traction in the field.
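To make the matching idea concrete, here is a minimal sketch of the kind of barrier-strategy level check that the Bosch et al finding suggests. The barrier names, strategy names, and level labels below are hypothetical illustrations, not drawn from the talk or from any specific study:

```python
# Hypothetical sketch: flag mismatches between the level of a prioritized
# barrier and the level an implementation strategy actually targets.
# All names and level labels are illustrative assumptions.

BARRIERS = {
    "low provider knowledge of the EBP": "individual",
    "no leadership support for the EBP": "organizational",
    "no reimbursement for the service": "system",
}

# Each candidate strategy and the level of determinant it primarily targets.
STRATEGIES = {
    "clinician training workshop": "individual",
    "implementation leadership coaching": "organizational",
    "audit and feedback": "individual",
    "payer policy advocacy": "system",
}

def check_match(barrier: str, strategy: str) -> bool:
    """Return True if the strategy targets the same level as the barrier."""
    return BARRIERS[barrier] == STRATEGIES[strategy]

if __name__ == "__main__":
    # The Bosch-style mismatch: organizational barrier, individual-level fix.
    plan = [("no leadership support for the EBP", "clinician training workshop"),
            ("low provider knowledge of the EBP", "clinician training workshop")]
    for barrier, strategy in plan:
        status = "OK" if check_match(barrier, strategy) else "MISMATCH"
        print(f"{status}: '{strategy}' for '{barrier}'")
```

In practice the matching logic is far richer than a level check, but even this simple bookkeeping makes a Bosch-style mismatch visible before strategies are deployed.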
So, a second priority for us is really specifying
and testing mechanisms, and here, we're thinking
about not just whether strategies work, but how
and why they work. This is drawn from a paper
that Cara and colleagues just published in
Frontiers in Public Health. And here, again
using the example from measurement based care
of wanting to increase depression screening,
our distal and proximal outcomes are actually
related to screening. We might think of a
financial disincentive for missed PHQ-9
administrations as a potential implementation
strategy, but here, we're really trying to get
people to specify what that strategy is addressing.
In this case, the effect of that strategy might
be mediated through clinician motivation, and
we want to think through the potential moderators
and preconditions that may influence the ability
of the implementation strategy to take effect:
what is the value of the disincentive? Is this
something that organizations and systems actually
have the ability to change, as a policy they
have control over? Is a communication infrastructure
in place through which that increased motivation
can actually lead to increased screening?
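One way to read that slide is as a structured causal pathway. The sketch below shows how such a pathway, from strategy through mechanism to proximal and distal outcomes, with moderators and preconditions, might be recorded. The field names and example values are our illustrative assumptions, not the actual diagram from Cara's paper:

```python
# Minimal sketch of one causal pathway entry, in the spirit of the
# strategy -> mechanism -> proximal -> distal chain described above.
from dataclasses import dataclass, field

@dataclass
class CausalPathway:
    strategy: str            # the implementation strategy being deployed
    mechanism: str           # the process through which its effect is mediated
    proximal_outcome: str    # the nearest observable change
    distal_outcome: str      # the downstream implementation/clinical outcome
    moderators: list = field(default_factory=list)     # factors that amplify or dampen the effect
    preconditions: list = field(default_factory=list)  # must hold for the strategy to work at all

pathway = CausalPathway(
    strategy="financial disincentive for missed PHQ-9 administrations",
    mechanism="clinician motivation",
    proximal_outcome="increased depression screening",
    distal_outcome="improved identification and treatment of depression",
    moderators=["size of the disincentive"],
    preconditions=[
        "organization controls the relevant payment policy",
        "communication infrastructure links motivation to screening behavior",
    ],
)
print(pathway)
```

Writing pathways down this explicitly is what forces the "what is this strategy actually addressing?" question to get answered.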
>> So, we were fortunate to receive AHRQ funding
in late 2018 to host a three-year conference
series in partnership with SIRC, the Society
for Implementation Research Collaboration.
There are two aims that will guide our work:
the first is to generate a research agenda for
studying implementation mechanisms that has
inputs from empirical work that has been conducted
and from our policy and practice stakeholders,
and the second aim is to then disseminate that
research agenda. We will accomplish this important
work through the convening of four work groups,
which you can see on your screen, and we have
fantastic colleagues on these work groups. The
blurry map illustrates what we're calling a
mechanisms network of expertise, which started
off as a U.S.-based group of individuals who
will populate the work groups, but which we're
currently growing internationally. Our first
face-to-face meeting will be, as I said, in
partnership with SIRC at the 2019 meeting here
in Seattle, September 12th through 14th, and
then we'll meet again in 2020, just as a
mechanisms network of expertise, to really
flesh out that research agenda. Then, in 2021,
we'll disseminate through SIRC and other mediums,
so we invite you to join us if you're interested.
>> Great, thank you, Cara. And just to wrap up
this section: whether we're formally modeling
the mechanisms of implementation strategies or
not, whether it's the type of modeling that I
showed on the previous slide or more traditional
logic models, we're really trying to encourage
people to talk about how and why their
implementation strategies are working, and
hopefully we're increasingly doing that so that
we can accumulate evidence over time. We also
think that we need a lot more effectiveness
research; as I mentioned, the evidence is growing,
but we still need to learn a lot about different
discrete strategies. We need to diversify the
types of strategies tested, for instance, doing
more at the organizational, system, and patient
levels within implementation science. Then we
need more comparative studies: for discrete
strategies, how can these single-component
strategies be optimized? For multifaceted
strategies, we're increasingly seeing innovative
designs, such as sequential multiple assignment
randomized trials (SMARTs), where we're thinking
about the sequencing of implementation strategies
and how we can start with lower-resource
implementation strategies and then adaptively
apply more intensive approaches as needed.
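As a toy illustration of that adaptive logic, here is a sketch of SMART-style sequencing. This is a sketch of the design idea only, not of any specific trial; the site names, response rule, thresholds, and strategies are hypothetical:

```python
# Hypothetical sketch of SMART-style adaptive sequencing: every site starts
# with a low-resource strategy; non-responders are re-randomized between two
# more intensive strategies. All names and thresholds are illustrative.
import random

def stage_one(site: str) -> float:
    """Deploy the low-resource strategy and return an observed fidelity score."""
    return random.uniform(0.0, 1.0)  # stand-in for measured fidelity

def run_smart(sites, response_threshold=0.6, seed=42):
    random.seed(seed)
    assignments = {}
    for site in sites:
        fidelity = stage_one(site)
        if fidelity >= response_threshold:
            assignments[site] = "continue: reminders only"
        else:
            # Re-randomize non-responders to a more intensive strategy.
            assignments[site] = random.choice(
                ["add external facilitation", "add audit and feedback"]
            )
    return assignments

if __name__ == "__main__":
    for site, arm in run_smart(["clinic A", "clinic B", "clinic C"]).items():
        print(site, "->", arm)
```

The appeal of the design is exactly what the prose describes: intensive (and expensive) strategies are reserved for the sites that demonstrably need them.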
And then there's a lot that we have to learn
about tailored implementation strategies, as
I mentioned previously. So, first of all, are
tailored implementation strategies more effective
than standard approaches? And if so, how can we
develop efficient and systematic approaches to
tailor more effectively? We also think that
there's a need for the utilization of a wider
range of designs and methods, and so we conducted
a review, led by Stephanie Mazzucca at Washington
University in St. Louis, where we found that,
certainly, RCTs and cluster RCTs were the most
common type of design used within implementation
science study protocols, but there's also a wide
range of designs and methods that we might apply
within implementation strategy studies, particularly
as RCTs are not always ethical or feasible.
Hendricks Brown and his colleagues have done a
lot of great work to highlight those experimental
and quasi-experimental designs, and I would urge
you to take a look at those as you're considering
different possibilities
within your setting. So, we also have a lot to
learn about economic evaluations of implementation
strategies. This is actually an old paper; there's
a more recent paper, just published I think in
the last couple of weeks, that more or less
replicated this finding. But in a review of 235
implementation studies, only 10% provided
information about implementation costs, which
obviously severely inhibits our decision-making
with regard to which strategies might be most
feasible within an organization or system. And
so, we need to increasingly think about cost as
we're thinking about our implementation strategy
studies. We'd highlight some practical tools that
have been developed; Lisa Saldana has developed
the Cost of Implementing New Strategies (COINS)
approach, which couples nicely with some work
that she's done to document the different stages
of implementation. But we need a lot more work
on this front and, potentially, common frameworks
for facilitating comparability across different
economic evaluations of implementation strategies.
This first figure is just to illustrate that,
surprise, surprise, implementation takes a lot
of time and often a lot of different implementation
strategies. As we've tracked implementation
strategies in research studies and in real-world
implementation efforts, we find that it takes a
lot of person-hours and a lot of different
strategies, and we really need to do a better
job of capturing that, so that we can build those
costs into implementation efforts and hopefully
account for those resources. I also just want to
mention that Todd Wagner and colleagues at the
Health Economics Resource Center, associated with
the Department of Veterans Affairs, are starting
to lead a workgroup around economic evaluation
in implementation science, and I'm eager to see
what that group develops.
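A very simple sketch of the kind of cost capture being called for here follows. The activities, hours, rates, and phase labels are made-up placeholders; this is the bookkeeping idea behind tools like COINS, not the instrument itself:

```python
# Hypothetical sketch: tally person-hours and costs of implementation
# strategies by phase. Activities, hours, and hourly rates are illustrative.
activity_log = [
    # (phase, strategy, person_hours, hourly_rate_usd)
    ("pre-implementation", "stakeholder engagement meetings", 20, 60),
    ("pre-implementation", "clinician training",              16, 45),
    ("implementation",     "external facilitation",           40, 75),
    ("implementation",     "audit and feedback",              10, 50),
]

costs_by_phase = {}
for phase, strategy, hours, rate in activity_log:
    costs_by_phase.setdefault(phase, 0.0)
    costs_by_phase[phase] += hours * rate

for phase, cost in costs_by_phase.items():
    print(f"{phase}: ${cost:,.2f}")
print(f"total: ${sum(costs_by_phase.values()):,.2f}")
```

Even this level of tracking, kept prospectively, would put a study well ahead of the 90% of reviewed studies that reported no cost information at all.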
And then finally, one of the challenges in
implementation science: I mentioned the
challenges with language; another challenge is
that we often have poorly described strategies.
We're not doing a good job of first tracking
and documenting what we're doing in implementation,
and then describing it clearly within the
published literature, which obviously limits
replication in science and practice and ultimately
precludes answers to many of the questions we're
pointing to with respect to examining mechanisms;
we don't understand how and why these strategies
work. Actually, a number of reporting guidelines
exist, and I'm going to highlight one of them
here that Proctor, McMillen, and I developed,
where we really call for people to carefully
name implementation strategies, hopefully in
ways that are consistent with published guidance,
whether it's the ERIC compilation that I mentioned,
the behavior change technique taxonomy, or other
published taxonomies within the field of
implementation science; to carefully define each
strategy; and then, perhaps more importantly, to
specify it: who are the actors involved, what
are the specific actions they're taking, and
what are they targeting? That action target is
another way of thinking about the mechanism the
strategy is meant to address. We also point to
temporality, the timing or sequencing of how
strategies are put together; in some cases, we
may actually need to, for instance, increase
provider motivation to learn about a given EBP
before we send providers to training, and
documenting that type of sequencing is what
we're pointing to here. And then, what is the
dose of the implementation strategy? What
implementation outcomes is it intended to
improve? And finally, what is the empirical,
theoretical, or pragmatic justification for
choosing that specific strategy; again, really
being clear about why that strategy was chosen
within the context of a given effort. There are
some practical examples of people applying these
types of frameworks, and basically, the intent
here is not to be onerous, but to really increase
our ability to replicate these strategies in
research and practice. I think we have, hopefully,
time for questions and discussion.
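As a concrete aid, here is a minimal sketch of what reporting against those elements might look like in structured form. The field names follow the naming, defining, and specifying elements just described; the example strategy and its values are hypothetical:

```python
# Sketch of a structured record for one implementation strategy, following
# the reporting elements described above: name, definition, actor, action,
# action target, temporality, dose, outcome addressed, and justification.
# The example values are illustrative, not drawn from a real study.
strategy_spec = {
    "name": "audit and provide feedback",         # ideally from a published taxonomy
    "definition": "collect performance data and give it back to clinicians",
    "actor": "quality improvement staff",
    "action": "compile monthly screening reports and review them with teams",
    "action_target": "clinician awareness of own performance",
    "temporality": "monthly, beginning after initial training",
    "dose": "12 feedback cycles over one year",
    "implementation_outcome": "fidelity to depression screening protocol",
    "justification": "evidence from prior systematic reviews of audit and feedback",
}

for element, value in strategy_spec.items():
    print(f"{element:>24}: {value}")
```

A record like this, kept for every strategy in a study, is what makes replication in both research and practice realistic.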
>> And a question is being teed up as we speak.
So, you talked a lot about the different resources
and the ability to really walk through how to
specify the different implementation strategies.
I wonder if any of you could give us a bit of
an example as to how, within the context of a
specific study, you thought about which
implementation strategies to use in your work
on that study. Any examples?
>> Go ahead, Rinad.
>> Sure. Yeah. So, I can't draw a parallel to
the study that I was talking about earlier
because, as I described, it was very much an
observational study where I got to watch what
a system was doing and the strategies that they
selected, but I can describe some of the work
that we've been doing around hoping to implement
firearm safety promotion interventions in
pediatric primary care. So, for that project,
we did a two-year contextual inquiry where we
used mixed methods to try to understand barriers
and facilitators, and we used intervention
mapping to develop a menu of something like 9
or 10 implementation strategies. And when we
sat down to write a grant to start testing the
different implementation strategies, using all
of them just felt very cumbersome. It kind of
felt like what Byron described, like we were
just going to use a kitchen sink approach, and
that didn't feel like the best use of our
resources, so we really drilled deep into the
theory to help guide the four strategies that
we selected. And hopefully we will test them
compared to implementation as usual, with two
of the strategies in the bigger package, because
we want to test against a real comparison
condition; we don't want to end up in a situation
where we're just showing that doing something
is better than doing nothing. So, that's kind
of how we went about deciding. I don't think
you necessarily need that two-year process for
every implementation study, but for the particular
case that I'm talking about, there were some
nuances to understanding how clinicians and
community stakeholders might feel about doctors
and nurses talking about firearm safety with
parents that required a little more contextual
inquiry.
>> I can add an example from a residential
treatment setting with youth where, similar to
Rinad, we did a mixed methods contextual inquiry
and prioritized barriers. We drilled down from
something like 76 barriers to a top three, and
then the rest of the list fell in rank order
thereafter. We looked at the available strategies
and matched them, to the best of our ability,
based on thinking about their mechanisms and
their conceptual alignment with the prioritized
determinants, but also having experts rate the
degree to which it was perceived that the
strategy would have a critical impact on
implementing the evidence based practice with
fidelity. And then we created blueprints that
walked across the different phases of
implementation to organize the strategies in
terms of temporality.
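The arithmetic behind that kind of prioritization and rating can be sketched simply. The barriers, strategies, and ratings below are invented for illustration and are not the actual data from Cara's study:

```python
# Hypothetical sketch: rank barriers by stakeholder ratings, keep the top
# three, then rank candidate strategies by mean expert ratings (1-5) of
# their likely critical impact on fidelity. All names/numbers illustrative.
from statistics import mean

barrier_ratings = {
    "competing demands on staff time": [5, 4, 5],
    "low buy-in from supervisors":     [4, 5, 4],
    "unclear documentation workflow":  [4, 4, 3],
    "limited training budget":         [2, 3, 2],
}

top_barriers = sorted(barrier_ratings,
                      key=lambda b: mean(barrier_ratings[b]),
                      reverse=True)[:3]
print("top barriers:", top_barriers)

# Expert ratings of each strategy's perceived critical impact on fidelity.
strategy_impact = {
    "protected time for delivery": [5, 4, 4],
    "supervisor-led modeling":     [4, 5, 5],
    "documentation templates":     [3, 4, 4],
}
for strategy in sorted(strategy_impact,
                       key=lambda s: mean(strategy_impact[s]),
                       reverse=True):
    print(f"{strategy}: mean impact {mean(strategy_impact[strategy]):.2f}")
```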
Byron, were you going to add an example?
>> Yeah, not so much an example, just a comment;
I think both Cara and Rinad have done work like
this too. Increasingly, I'm trying to think
about developing and testing different methods
that would accomplish exactly what you're asking:
how do we actually select these implementation
strategies? So, how do we, for instance, apply
intervention mapping within community contexts
and get organizations to be able to use it to
tailor implementation strategies to their
specific needs? I just wanted to signal a need
here; the question you asked is, of course, a
practical consideration for anyone implementing
an intervention, as well as for researchers
proposing studies, but there's also room, I
think, for a lot of methodological development,
and hopefully we're pointing to the fact that
we don't have this all figured out yet. I think
we need some creative and rigorous approaches
to do that, so hopefully we'll have more and
more people explicitly trying to answer that
question through methods development.
>> A recruitment call, for folks who are listening
in, to enter the fray. Sorry, Rinad.
>> No, I was just going to share; this is so
hot off the press that I wasn't sure if I'd
say it. But we are piloting the use of a method
called best-worst scaling, which is a type of
approach to elicit stakeholder preferences that's
in the discrete choice experiment family, where
we're basically forcing a choice. You know, you
give eight or nine strategies to a stakeholder
and say, which one is the best and which one
is the worst, and they have to pick, because
in our previous work we've been using Likert
scales and everyone kind of likes everything
and says everything is important, so it's been
kind of hard for us to suss out what we really
should be focusing on, at least from that
community stakeholder lens. And so, I'm really
excited we're doing this work as part of our
NIMH ALACRITY Center, and we've gotten a great
response rate to a survey using it, and people
have been sharing that they're really enjoying
it. So, hopefully in the next six months or so
we'll have some interesting results to share
with the community about this best-worst
scaling approach.
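For readers curious about the mechanics, best-worst scores are often computed simply as the number of times an item is chosen as best minus the number of times it is chosen as worst. A minimal sketch, with invented strategies and choice data:

```python
# Sketch of simple best-worst scaling scoring: for each strategy, count how
# often it was picked "best" minus how often it was picked "worst" across
# respondents' choice tasks. Strategies and responses are hypothetical.
from collections import Counter

# Each tuple is one respondent's choice in one task: (best_pick, worst_pick).
responses = [
    ("training", "financial incentives"),
    ("facilitation", "training"),
    ("facilitation", "financial incentives"),
    ("audit and feedback", "training"),
]

best = Counter(b for b, _ in responses)
worst = Counter(w for _, w in responses)
strategies = set(best) | set(worst)

scores = {s: best[s] - worst[s] for s in strategies}
for s, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{s}: {score}")
```

Because respondents must choose, the resulting scores spread out in a way Likert ratings often don't, which is exactly the problem Rinad describes.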
>> I think that the general message here is
just to have this intentionality as you go into
this phase of the work; there are different
parameters that can inform how you approach
selection and tailoring. And I guess this makes
sense, but at the time, I was surprised when
implementation practitioners remarked on how
the causal pathway diagramming, and Byron showed
you a picture of that, was particularly useful
for them in the strategy selection process,
because it forces that intentionality: instead
of just going toward the implementation strategies
you're used to, like we always do training, we
always do audit and feedback, we always do
learning collaboratives, you think really
carefully about how they work and what barriers
we're faced with. So, with that, I will have
us switch gears. We're moving into the section
on evaluating implementation efforts. So, we'll
kick this off with one of the most widely
utilized evaluation frameworks, known as RE-AIM
(reach, effectiveness, adoption, implementation,
and maintenance), by Russ Glasgow and colleagues.
I think the citation got removed from this slide,
but there's been an enormous amount of work
applying the RE-AIM framework to hybrid and
implementation studies, and there are some
useful web-based resources as well.
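Several RE-AIM dimensions reduce to simple proportions, which makes them straightforward to compute once the denominators are defined. A toy sketch, with all counts invented:

```python
# Hypothetical sketch of two commonly computed RE-AIM proportions.
# Counts are invented; in practice, the hard part is defining denominators.
eligible_patients = 1200
participating_patients = 480     # reach numerator
approached_clinics = 20
adopting_clinics = 13            # adoption numerator

reach = participating_patients / eligible_patients
adoption = adopting_clinics / approached_clinics

print(f"reach: {reach:.0%} of eligible patients participated")
print(f"adoption: {adoption:.0%} of approached clinics took up the program")
```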
And then this framework was also shared earlier;
this is by Proctor and colleagues, first in 2009
and then updated in 2011, differentiating
implementation outcomes distinctly from service
and patient outcomes, and there are eight
implementation outcomes here. There have been
some efforts, including a measurement chapter
in the most recent edition of the Dissemination
and Implementation Research in Health book,
edited by Brownson, Colditz, and Proctor, to
indicate in what phases of implementation these
particular eight outcomes are most salient, to
help point toward when you ought to be assessing
each of them, and then to draw some relationships
between them. So, there's some evidence suggesting
that acceptability, feasibility, and appropriateness
are predictors of adoption; in an exploration
phase of work, for example, you might want to
be assessing acceptability, feasibility, and
appropriateness. And so, in the measurement
space, our team has done a lot of work, which
started with SIRC, the Society for Implementation
Research Collaboration, to try to wrap its arms
around the quantitative measures of implementation
outcomes and beyond. We've actually tackled all
of the CFIR constructs as well; we just completed
systematic reviews of all of the CFIR constructs
as of last week, and we're now finishing up an
update of the measures that people are using to
assess implementation outcomes. So, in 2019,
we'll be able to help the field, through shared
repositories, to understand which measures are
valid, reliable, and pragmatic for assessing
implementation constructs and outcomes. Our team
also, through an NIMH-funded R01, developed
three new measures of implementation outcomes,
and the citation
is there for you. This has been mentioned several
times throughout today: there is a tendency
toward bringing qualitative methods to bear in
our implementation research, and mixed methods
as well. There are really strategic reasons for
using each of these approaches to complement,
or instead of, quantitative approaches, and I
really appreciate the taxonomy that Larry Palinkas
and colleagues published in 2011 (in your
bottom-left corner), helping us think about
structure, function, and process for mixed
methods work. It indicates, for example, that
if there are no quantitative measures that exist,
or none that have evidence of reliability or
validity, it might be absolutely appropriate to
bring a qualitative method to bear; or perhaps
you're using quantitative methods for the purpose
of sampling, to then use qualitative methods in
a mixed methods approach to dive deeper and
explore some of the constructs of interest. So,
hopefully what you're hearing in my description
is that mixed methods really means that there's
integration; it's not simply the use of
quantitative and qualitative in the same study,
it's an intentional integration of the two
different methods. And then, to help us further,
there's been some really important work published
around study design. Given the settings that
we're naturally working in to explore implementation
processes and outcomes, it's not always the case
that we have the luxury of doing, you know, a
five-year randomized trial, and so we need to
understand what other study designs are available
to us that will allow us to address the
implementation research questions at hand, so I
encourage you to look at these articles, for
example, and others in your resource guide for
help with that. In terms of study designs,
perhaps one of the most pivotal pieces of work
for our field was the offering by Geoff Curran
and colleagues on hybrid trials, and so we'll
break that down a little bit further here,
acknowledging that there is a lot of time between
efficacy, effectiveness, and D and I studies.
Curran and colleagues really honed in on this
gap between an effectiveness and a D and I study,
where effectiveness studies are indeed happening
in the intended settings, often with the intended
providers and the diverse range of patients being
targeted, but still with incredible resources
brought to bear and control of the contextual
factors. And what they're suggesting is that
maybe, if we're conducting hybrid
effectiveness-implementation designs, we can
actually speed things up a little bit. So, they
articulated three types of hybrid studies that
you can think about as sitting on a continuum,
from emphasizing effectiveness on the left to
emphasizing implementation on the right, with
studies that have co-primary aims of implementation
and effectiveness research [inaudible] sitting
squarely in the middle. And so, Rinad, do you
want to say a little bit about your P2I evaluation?
>> Absolutely. So, we're going to bring it home
now and try to put a bow on it and talk you
through how we used evaluation frameworks in
our two case studies. So, I will briefly describe
how I thought about evaluation for the naturalistic
observational study that I did in the City of
Philadelphia. Just as a reminder, I was very
interested in the question of whether or not
use of cognitive behavioral therapy, the evidence
based practice that we were interested in,
increased over the five-year period in response
to the creation of a centralized infrastructure
at the system level. And so, the implementation
outcome that I was primarily interested in was
really use of CBT strategies; I'm careful not
to use the word fidelity, which has a particular
meaning, because we were looking across treatment
strategies. In this particular study, we used
clinicians' self-reported use of the strategies
they used with a representative client, using
a measure that is established in our field. And
of course, there are limitations to using
self-report when asking clinicians to describe
what they did, but within the resource constraints
of this particular study, that was what we were
able to do. The second set of questions was
related to which contextual factors were related
to clinician behavior in a large system
implementing evidence based practice. Because
we were using the EPIS framework to guide our
understanding of contextual factors, we drew
out the factors that we thought might be most
important, and fortunately, for these constructs,
there were a number of existing measures that
had just been released around the time the study
started, particularly the Implementation Climate
Scale and the Implementation Leadership Scale,
which are measures that Greg Aarons and Mark
Ehrhart developed right around the time when we
were rolling the study out. So, we were able to
use quantitative measures to capture a number
of the determinants we were interested in, and
we were interested in looking at the relationship
between those constructs and the primary outcome
of interest, which was use of CBT strategies.
And drawing on what Cara said, because mixed
methods and qualitative evaluations are so
critical in implementation science for giving
richness to our understanding of findings, we
conducted qualitative interviews, using the EPIS
framework to guide the questions that we asked
and also our coding scheme, to understand
stakeholder perspectives on barriers and
facilitators to implementation.
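For the quantitative piece, an analysis relating those constructs to the outcome might, in spirit, look like the sketch below: clinician-reported CBT use regressed on implementation climate and leadership, with clinicians nested in organizations. The variable names, toy data, and the modeling choice are our assumptions, not the study's actual analysis:

```python
# Sketch (not the study's actual analysis): relate clinician-reported CBT use
# to implementation climate and leadership scores, with a random intercept
# for organization since clinicians are nested within organizations.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "cbt_use":      [2.6, 2.9, 3.2, 3.8, 4.0, 4.2, 2.0, 2.3, 2.5],
    "climate":      [2.5, 2.8, 3.0, 3.5, 3.6, 3.8, 2.0, 2.2, 2.4],  # Implementation Climate Scale
    "leadership":   [2.7, 2.9, 3.1, 3.4, 3.7, 3.9, 2.1, 2.3, 2.5],  # Implementation Leadership Scale
    "organization": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
})

model = smf.mixedlm("cbt_use ~ climate + leadership", df,
                    groups=df["organization"]).fit()
print(model.summary())
```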
>> Going back to the case study that I started
us with: the first aim was to compare the effect
of standardized versus tailored measurement based
care implementation on both clinician and client
level outcomes. Inherent to the tailored condition,
clinics were bringing tailored strategies to bear,
so it was important for us to be able to capture
that work and to do so in a way that aligned with
reporting recommendations. So, we established a
coding method for implementation team meetings
that could be used to capture, and then report
out on, the implementation strategies that were
discussed, and those that were deployed, each
month, so that we could then link them to see
if certain types of strategies, or categories
of strategies, or the number of strategies
influenced fidelity to measurement based care.
And I encourage you, wherever possible, especially
if you're able to validate these measures, to
use administrative data, data that's already
being captured, either by the electronic health
record or for reporting purposes for policies
or accreditation systems. We were fortunate to
be able to embed the fidelity measures into the
electronic health record, so the three elements
of fidelity, administer the patient-report
measure, review the measure, and then discuss
the scores, were a mix of objective data capture
and self-report. So, in this case, the
administration of the Patient Health Questionnaire,
our depression severity measure, was captured
objectively with the patient's scores. Review
of the graph, the score trajectories over time,
was also objective; we could capture whether a
provider actually opened up the graph and took
time to review it. And then the discussion, and
how it impacted treatment, was a self-report
question embedded into the progress note for
the provider.
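A simplified sketch of how those three fidelity elements might be scored from EHR-style event data follows. The record structure and field names are hypothetical, not the study's actual data model:

```python
# Hypothetical sketch: score the three measurement based care fidelity
# elements per visit from EHR-style records. 'administered' and
# 'graph_opened' stand in for objective capture; 'discussed' stands in
# for the self-report item embedded in the progress note.
visits = [
    {"visit_id": 1, "administered": True,  "graph_opened": True,  "discussed": True},
    {"visit_id": 2, "administered": True,  "graph_opened": False, "discussed": False},
    {"visit_id": 3, "administered": False, "graph_opened": False, "discussed": False},
]

ELEMENTS = ("administered", "graph_opened", "discussed")

for visit in visits:
    score = sum(visit[e] for e in ELEMENTS)
    print(f"visit {visit['visit_id']}: {score}/3 fidelity elements met")

# Proportion of visits with full fidelity (all three elements present).
full = sum(all(v[e] for e in ELEMENTS) for v in visits) / len(visits)
print(f"full fidelity in {full:.0%} of visits")
```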
And then from there, the second aim of our trial,
which I didn't introduce you to before, was to
identify contextual mediators of measurement
based care fidelity. We hypothesized that in the
case of tailored implementation, clinics would
actually be improving contextual factors that
would then improve measurement based care fidelity,
because they were trying to bring strategies to
bear that would address contextual determinants
as they went along with their implementation;
whereas the standardized condition was so focused
on implementing measurement based care with
fidelity, in-the-room fidelity, what the provider
was doing with the patient, that clinics weren't
targeting their attention to these contextual
factors, and so we thought those factors would
remain present and problematic in the standardized
condition. For the contextual factors that we
were interested in looking at, we didn't always
have quantitative measures available for
administration to assess these constructs. In
some cases, we did. In other cases, we didn't.
And in other cases still, it's simply not
appropriate to be inquiring about people's
attitudes, perceptions, or beliefs; rather,
you're trying to get at more objective data or
data from a different source. And so, we have
a mix of qualitative and quantitative measures
of these contextual factors. That's all I'll
say about the measurement based care evaluation;
let's see if David would like to ask some
questions that we can field.
>> Sure, I would love to. Thanks for the
opportunity. You've said a lot, within the
context of these two studies and just overall,
about how many different measures and constructs
might be useful. I'm wondering, from the
perspective of somebody who might not consider
themselves primarily focused on implementation,
how would you incorporate some of these evaluation
methods into studies that might be earlier in
the continuum, whether we're talking about an
efficacy trial, or you mentioned hybrid
effectiveness-implementation designs; are these
the kind of things that you would envision being
useful components of those earlier intervention
studies? So, if I'm doing an effectiveness trial,
or even thinking about efficacy, how might I
integrate some of these implementation science
constructs and measures into my study? Should
I? And how might I do that?
>> Yes. Well, it depends on when you begin
thinking about this, and if you have the funding
to do so. But I think it also depends on what
you think the real empirical question is for
the work that you're doing, of course. Some
folks are just really going to be quite interested
in understanding what got in the way of their
intervention being delivered with fidelity or
being scaled, and in that case, you're going to
be exploring barriers and facilitators, and you
might have a narrow set that you're focusing on
because you have some hypotheses or predictions
about what will get in the way. I, personally,
think at this point all effectiveness studies
should be hybrid type 1s at the very least, and
what that means, in my perspective, is that
we're capturing implementation process information:
what did you do to help the intervention get
implemented? Because without that information,
you'll essentially have to recreate the wheel,
if you find out that the intervention was
effective, to figure out how to get it implemented.
I welcome my colleagues to add to that response.
>> Not surprisingly, I agree completely with
Cara and her response. I've been very excited
to see how many folks who are interested in
doing more traditional intervention efficacy
and effectiveness work have been coming to me
and asking how to infuse aspects of implementation
science into their work, and I give them an
answer that's very similar to what Cara said.
I think even earlier in the pipeline, there is
thinking around how to design for dissemination
and implementation: making sure that your
interventions are going to be implementable
when the time comes, including stakeholders
from day one, and developing the interventions
for the places in which they're meant to be
implemented. So, I also think there's a tremendous
opportunity early in the intervention development
process to be thinking with an eye towards
implementation, and I think that's really exciting.
>> We then have a list of other questions that
we could consider, so: advice for someone
starting off in the field. My thinking would
be to join some networks or get on some
newsletters. So, David, would you say something
about the NCI network and what their offerings
are?
>> Sure. I mean, we have a monthly newsletter.
We do a lot through social media; [inaudible]
is the Twitter handle, and they're always trying
to make sure that people are aware of upcoming
or recently published papers and upcoming
conferences. We do monthly webinars, and so
folks can certainly follow up with me or our
colleagues at NCI if they're interested in
joining those lists.
>> So, we have something, I hope I don't butcher
the URL, but Implementation Science News; if
you Google that, this is an effort through the
Consortium for Implementation Science, which is
RTI International and UNC, and that's a great
collection of resources about training,
publications, and the like. The other thing
that I'll add, in addition to networks, is that
there's just a ton of good webinars and other
resources on the web. The Prevention Science
and Methodology Group has a great series where
they feature some really high-level methodological
talks in implementation science, and, you know,
the NCI group and the VA cyberseminar series
have a lot of implementation-related webinars.
So, in recent years there are more and more
resources for people who are just starting out
and want to dip their toe in the water before
they jump into some of the more intensive
training institutes or formal coursework and
the like, so I think that's a great thing. I
realize that I sort of jumped us ahead to
networking and connecting, but let's stay there;
Rinad, do you want to add the offerings through
Penn?
>> I can, although I might answer this question
a little bit differently. For me, the advice
that I would give someone starting off in the
field is really about seeking out informal and
formal mentorship in the space. I completely
agree that all these various resources are
tremendous ways to quickly get up to speed on
implementation science, what's been done and
what the future looks like. But speaking from
my own personal experience, the way in which
one really goes all in is having both the formal
opportunities that are offered through the NIH
and various universities and then, you know,
really apprenticing, or working closely, with
an implementation science expert; there's no
substitute for that.
>> Do you want to mention any of the Penn
offerings or should I?
>> Sure. Yeah. We have a three-day deep-dive
implementation science institute that we offer
annually, and each year we've been fortunate
to have keynote speakers join us from the NCI.
The first two years, it was David; this year,
we have [inaudible] Norton joining us. It's a
really good way to jump into the deep end of
implementation science and its terminology, to
kind of jump-start moving into that space.
>> And then I was going to add that through
SIRC, the Society for Implementation Research
Collaboration, there is a resource, a reading
list for folks new to D and I, in the initiatives
tab on the website. In addition, through SIRC,
folks can join a network of expertise, depending
on their level of training and familiarity with
implementation science. So folks who might not
have PI'ed an R01 in implementation science,
for example, could get matched up with an expert
network-of-expertise member for mentorship, and
it's free for SIRC members to obtain that. I
feel like we addressed the first two questions;
if folks are okay with that, I'll move on to
how to build capacity at your institution. My
quick response would be that my approach has
been to host journal clubs and to engage those
with the energy and interest in proposal
development, giving them pieces of implementation
science proposals to lead or co-lead and then
iterate with me, so that's a couple of different
ways that we've approached this. Byron, what
would you say about this?
>> Yeah, so, first of all, start with the
resources and infrastructure you already have.
So many CTSAs now include some sort of
dissemination and implementation research core,
or groups that are tasked with providing
consultation around implementation science, and
there are a number of those groups at UNC and
at Wash U, and certainly Rinad will talk about
what's available at Penn. So, I think working
through those formal channels. I think there
are a whole host of people interested in this
area from across departments; the challenge is
almost bridging those departmental and school
divides, speaking the same language, and doing
some translational work to identify the common
areas of interest and the problems that you're
addressing. But I think there are all kinds of
models for, as you mentioned, journal clubs and
other more flexible groups that are working on
grant review, paper ideas, and applied
implementation efforts, and so I think there
are a number of opportunities to plug into. And
I guess I'd just add, bouncing back to the other
question, that I've found the implementation
science community overall to be extraordinarily
collaborative, generative, and just a friendly
and fun group to hang out with, so my piece of
advice is, don't be afraid to email people and
reach out. You know, if you're at a conference
and you want to connect with someone, I think
it's a really welcoming and fun group, so I
would welcome you to do that.
>> Rinad?
>> So, yeah. I've been very fortunate to be at
a university that has recognized the importance
and the promise of the field of implementation
science. When I joined the faculty here, there
were already rumblings of two initiatives, which
have now turned into a full-blown implementation
science community. One was a working group
convened by the Leonard Davis Institute of Health
Economics to bring together an implementation
science community a little more informally, with
just the types of meetings that you both have
described, really focused around proposal
development and technical assistance for
implementation science proposals. The other is
a course developed through our master's in
health policy, a semester-long course focused
on implementation science in health and
healthcare, which then gave birth to the
institute because there was so much demand for
training in implementation science. Those
various initiatives have now been turned into
a center, the Penn Implementation Science Center
[inaudible] the Leonard Davis Institute, and
it's really a beautiful thing to see that kind
of community coming together. And like Byron
described, you know, we have folks from surgery,
psychiatry, and internal medicine, and it's
really organically evolved into a community of
people who are really committed and passionate
about reducing the research-to-practice gap and
training the next generation of implementation
scientists.
>> And if I can add quickly, Rinad reminded me,
with the course structure, I think we're finding
increasingly that we need, you know, PhD-level
or research-focused classes, and probably
increasingly more specialized classes, focused
on implementation science, as well as the need
to train people who are going to be implementation
practitioners, so MPH types or MSWs and the
like, who might be doing this work and need to
apply these skills but may not be as interested
in the theories and methods.
>> Right. David, would you like to add anything
to that question?
>> I guess the one thing I'd say is that each
of the three of you is at an institution where
there's been a reasonable amount of support for
this work for a while, and for people listening
who may not see that as transparently at their
institution, I would encourage them to look
beyond some of the verbiage. You know, whether
somebody is specifically laying down their
signpost as an implementation scientist or is
simply interested in understanding and trying
to improve practice, there have been, as was
said as part of this training, a lot of different
fields that have really been thinking about
these issues for decades, in some cases centuries.
So I would look for other colleagues, each maybe
in disparate areas around health, or even in
other social services, who are all interested
in trying to understand how work is delivered,
how quality care emerges, and what we can learn
together about how to improve what's being
offered to people across the lifespan and across
different social service agencies, in health
and in healthcare.
>> Great. I'm not sure we have time for the
last question, so instead, I would invite you
to find us at the next D and I conference and
we can discuss over coffee. How does that sound,
friends?
>> Sounds great.
>> Good. David, do you want to wrap us up with
any final thoughts?
>> So, we started this out because we recognized
that there's so much interest in the field and
there isn't always a freely accessible way for
people to get oriented, and so we've been very
appreciative of Cara, Rinad, and Byron's interest
and willingness, not only at our last in-person
conference, but to provide this training online,
so that even if you can't make it to the 12th
Annual, which we'd love to see you at, you'd
still be able to get a good orientation and
hopefully find other like-minded folks to help
us move the field forward. So, my huge thanks
to the three of you for spending the last couple
of hours with us and for providing this kind
of resource for anyone who wants it.
>> Our pleasure.
>> Thank you.
>> Thank you for having us.
