DAVID: So, thanks very much. This whole thing started with a question. I was sitting in Bob's office and he said, you know, would it be helpful to have a consortium? That was the question, and we thought about it for about a year. Ultimately, because we have other examples before us within the division, various consortia, we thought intuitively that it would make sense, above and beyond all of the other things that Bob has said and the activities he just quickly reminded you of.
We really didn't have a forum in which we could try, again, to think as a group about what we're missing.
And so it really is an opportunity.
And thank you all for joining us in person.
Thanks to the many of you who can't be in this room but are joining us remotely.
This is really about, and hopefully you'll find this over the next two and a half days, getting as much from this group as possible to help guide us as to where we want to go next.
You'll see the continued scroll of various answers or responses to this initial question.
Why did you individually choose to be part
of this meeting today?
Great to see "Bob does rock." You can see it right there. And "he's funny." Hilarious. I think someone else said "IS does rock." That's great too.
And, you know, we really expect this is going to succeed on the basis of your participation throughout and your interest in helping us think through where we need to go next.
Okay.
And so, thank you to the seventy-six of you
who have already chimed in on this particular
question.
There'll be more to come, and we are capturing all of this information because we need it and because we want it. Over the course of the next two and a half days, you'll see a lot of what you're offering reflected back to you: are these the things that we need, and how do we together work through that specific set of activities in particular areas?
But what should we do?
How can we at the NCI help to pave the way for more and more and more, which is what we want to see?
So, I'm going to do alt+tab.
Hey, it worked.
And now we're in sort of intro mode, and you'll see this will echo a number of the things I just said. Many of you are already ahead of the game in terms of the whole Mentimeter thing.
If you haven't, just note that at various points along the way you'll see this kind of picture pop up that says: go to the website, use the particular code. And if you have cell phones, if you have your laptops, any way that you can access the Internet, we would love to have your participation as we go along.
Because, like I said, within a group of a hundred and twenty here, and potentially up to a hundred and fifty or so who are listening remotely, sometimes it's hard to make sure that there's enough airtime for everybody to speak.
But we definitely don't want to miss the wonderful
ideas, suggestions, reflections, critiques
that you have because we need them all.
So brief history.
I put up a slide that I realized nobody can
really read on the right side.
The point of this is that from 1971 to the
present, there's been a steady stream of inquiries
of activities of efforts to try and say, how
do we not just end with a scientific publication?
How do we not just end with a study?
But how do we increasingly reduce the burden
of cancer in the US and beyond?
Right? Because that is ultimately the test of whether all of our efforts at the NCI are fruitful.
There have been different steps along the
way.
This particular graphic comes from a book that we were able to publish in November, a collected volume of where we have gone, and really where we want to go, in Implementation Science across the cancer continuum.
But maybe for easier legibility I'll go to a couple of the highlights. That was a chapter, by the way, that Cindy, along with John Kerner and Russ Glasgow, had produced, which does a really nice job of talking about all of the different events that have led us to be able to stand in front of you and ask you for even more.
So some of those events: in 2005, the NCI, along with NIMH and a number of other institutes across the agency, first came together and said this is not just an agenda in any one area of health research, but really something that we see being able to benefit across the board. A couple of years later, as part of that initial plan, we thought wouldn't it be nice to have an annual conference. It started out with 300 people; for the next one, which will be in December, we expect roughly what we've had the last couple of years, which is 1,200 or 1,300.
So, the field has clearly grown.
We have been able to argue effectively to make sure that the Center for Scientific Review has a standing review panel. Whether you come in under these program announcements or other applications, other mechanisms, there's an opportunity to make sure that the expertise needed around Implementation Science is around the table, reviewing your applications.
That was a really great thing that happened
before we actually expected it to, which was
pretty cool.
Bob had mentioned TIDIRC, which we've had the last couple of days; that was preceded by TIDIRH, the Training Institute for Dissemination and Implementation Research in Health, which started as a joint effort of NCI with NIMH and our Office of Behavioral and Social Science Research. We're not very creative with our names, I guess. A number of years later, Ross Brownson and colleagues at Wash U, and Graham Colditz, who is here, started the mentored training for dissemination and implementation research in cancer.
Just curious by show of hands, how many folks
in the room have participated either as faculty,
or as fellows or actually, in some cases as
both in TIDIRH or TIDIRC or MT-DIRC?
Okay, so this is a good reflection that the interest you had in taking a chance on joining one of these courses seems to be maintained today. That's very cool.
That's very cool.
As Bob mentioned.
I'll go very briefly.
A little bit more.
We've been very fortunate to have the Cancer Moonshot, which really has had Implementation Science as one of its core themes from the beginning.
It was last year that we decided that, while we'd had a lot of success across the board as far as NIH was concerned in training folks, we didn't have as much as we wanted, given all of the scale-up that we want to do in the cancer space. And so that was why we have just completed our second year of TIDIRC, and there's hope that the field will agree that this is something we can sustain over time, because we're looking to get your assistance in trying to scale out and scale up some of those efforts.
And, you know, here we are today with the
Implementation Science Consortium in Cancer.
So this is now I'm speaking in the present.
So just a quick, reminder for those of you
who have not paid as close attention to the
cancer moonshot as others.
It was very, very gratifying that the Blue Ribbon Panel that the NCI was able to convene, to inform us about the next set of directions in cancer research overall, said that one of those key themes should be implementation science. The working group that was concentrating primarily on what that looks like in prevention and screening reflected back to us that it is the suboptimal uptake of all of these wonderful evidence-based cancer prevention, screening, and other interventions, particularly among underserved populations, that's getting in the way of true population impact, or at least the kind of impact that we ought to expect from our research.
And they did a sort of back-of-the-envelope discussion, some of those folks are in the room here, that if we just concentrated on colorectal cancer screening and follow-up, HPV vaccination, and tobacco cessation, we could make a huge additional impact, averting hundreds of thousands of cancer cases and hundreds of thousands of deaths.
They reflected back to us a key knowledge area, right? This isn't just about doing. This is about learning how to do better, and the knowledge base around implementation strategies was what was truly needed to enact evidence-based care.
And so that again gave us the chance to say, with not all of the 400 million that we have this year, but some of it: what are the kinds of opportunities that would really catalyze work in this space? One of the first things that we did, and folks Sarah Kobrin, Wynne Norton, and Genevieve Grimmes from NCI are leading this charge, was to take a look at one of those specific areas that the Blue Ribbon Panel gave us, colorectal cancer screening, follow-up, and referral to care, and say: how can implementation science enable us to focus on disparities, to focus on rectifying the gaps in coverage in terms of screening, follow-up, and referral to care?
And so that's been a wonderful chance to get
things going and we appreciate everyone who's
in the room or listening online who's been
a part of that so far.
We have a first cohort and an upcoming second cohort of different trials. And our hope really is that at a much broader scale we're able to figure out how to raise the rates, not necessarily at the national level, where we're starting to make progress, but particularly recognizing that we have populations and locations where screening rates are far lower than they ought to be. And that's great.
A number of you in the room or listening online have been part of the Cancer Center Cessation Initiative, C3I. Michael Fiore has been the lead of the coordinating center at the University of Wisconsin-Madison. Again, 42 cancer centers having a chance to think through how we implement effective interventions for tobacco cessation within cancer care delivery.
And our hope out of this is that it becomes a sort of natural laboratory. You'll hear a lot about that over the next few days. It enables us to really understand what's working at different local levels, what could be scaled up, and how we build the knowledge around that.
Again, not possible without the Cancer Moonshot.
A third one that Bob had mentioned: we've had two rounds of an RFA focusing on inherited cancer syndromes, really saying it's not enough to be able to identify an individual who may have a higher risk through genetics or, you know, other factors. We have to figure out how to maximize the use of that knowledge to improve their trajectories. And so, again, a great chance that might not have existed otherwise.
And then most recently there's this notion of the Implementation Science Centers for Cancer Control, which Cindy and April have been leading for us, coming this fall. So we're not there yet, but we see it again as another way to try and fill in some gaps, in a certain number of areas where we can have these sort of natural laboratories, ongoing places to study a whole range of questions, where we can think more about methods and measurement development and about common data. And the added goal, which is in part why we're all here today as a test case, is to say: can we do a better job of building, at the field level, an implementation science consortium?
And so, when we think about where this consortium sits, it sits in the history of a number of successful grantees, in the room and elsewhere, who have gone through the program announcements and the various ongoing funding opportunities; those who have been involved in some of these moonshot activities and will be going forward; some of the cancer center supplements that we've done that are targeting areas where uptake of an effective intervention is low, not just smoking cessation, HPV, and others; and existing networks like NCORP and other cooperative groups that we see as potential places where we can do more and more implementation science.
What we're here today to do is to think beyond the individual project. We're here to think about how we can do a better job as a field in fostering collaboration, in networking, in building capacity. But importantly, and you'll see this as we go forward: what are the sort of public goods? What are the things that we could all benefit from, to be able to advance our own studies but also to galvanize the field in ways that we haven't been able to so far?
And that's really where we are.
So, when we think about the current needs: well, we've been very successful in figuring out ways to bring people together, but we haven't necessarily cultivated a sort of field-wide approach. How do we do a better job collectively of advancing implementation science in the cancer space? We've done a great job, I think, and hopefully you would agree, of finding initial training opportunities, but we have such a need for ongoing mentoring and ongoing technical assistance that, as Bob said, to some degree it outstrips the capacity that we have internally. And so we're looking for a more field-wide approach to do that; to try and build more capacity in the range of different settings where we want to do our work; and ultimately to fuel a whole range of sort of next-generation studies that are as impactful as possible, that are rigorous, relevant, and ambitious.
We don't want to continue to circle around things that we've asked already; we want to say, what is the next set of studies that we need?
So, this is a village, a big village with a large geographical territory, but we very much appreciate the folks on the slide who have been our steering committee and our external facilitators over the next couple of days, because they are helping to make sure that we get as much out of this as we possibly can. So thanks, definitely, to them.
And then I just want to say: here are the principles that we've set out. I feel comfortable, and humble, in saying that we may not get there over the next couple of days, but this is at least where we need to be with this kind of a consortium.
The first thing is inclusion, the big-tent kind of thing. The idea is that we need to focus on making sure this doesn't check people at the door, doesn't close off the many different stakeholders, not just researchers but practitioners, policymakers, etc., that ultimately need to be a part of this.
The reason why we have the capability of folks listening in, and hopefully participating online, is that we didn't want the room capacity, which we know to be limited, to be the limiting factor for who could participate. And so we will be working very hard to make sure that we include as many different perspectives as possible.
Along those lines: diversity of perspectives. Diversity of the individuals is incredibly important. Because of the diversity of the problems that we have out there, we need to figure out a way to make sure that the people who are participating reflect that diversity and bring those important perspectives, important experience, and important expertise of all different stripes, so that over time we can make sure we're not missing things. That's incredibly important to us.
Transparency. We don't want the proceedings, the discussion, to seem locked away where nobody has access. We are going to be working incredibly hard to make sure that, whether it's the Mentimeter questions we are asking or the discussions we have in the small groups and large groups, they are accessible and as transparent as possible, because everyone, no matter where they are, ought to be able to benefit from the good work that folks are putting in together today, tomorrow, and Friday.
And we want to be strategic, right?
We want to be thinking about how this group can best advance upon what we've already done before, really taking some time to think about prioritization of the different ideas that we're going to brainstorm. We're going to collect everything, but we need your help in thinking strategically about the directions that we need to move forward with. And we want to try and be efficient.
Right?
We know that we don't have infinite dollars. We don't have infinite time. We don't have infinite energy. And so how can we be efficient in thinking about the specific next steps that the NCI should be considering and trying to facilitate following these couple of days?
Does that make sense so far?
Okay.
So, this is just how we see it, and then there's going to be an opportunity to see how you see it. These are what we see as at least proposed objectives for this sort of developing consortium, right? At the front, it's: what are the public goods for implementation science? Like I said, it's really not as much about a time when people are bringing their individual R01s that they're looking to submit in the next few months, but about what are the things that we could all benefit from? What are the things that we see as relevant? They will help me; they will help my colleagues; they will also help the broader community.
We do hope that this will foster collaborations, that as you all have a chance to network and get together, you can identify areas of common interest. And hopefully this will be an opportunity to connect after Friday and ongoing. We've had that at the broader NIH or even other-agency level, but we really haven't done it as much in the cancer control space. And so we are excited about that and would love to improve different networking strategies, and dissemination.
We don't want to see the results, the learnings, from all of the different things that you've done get locked away anywhere. So how can we figure out better ways, through this consortium, to improve dissemination? And then, importantly, with our steering committee giving us direction, we want to target areas of the field that we haven't done as much in, that we see as deserving of a few days of sort of deep thinking, deep strategizing, and expanding.
So at this point we'd like your feedback. Again, you'll see this: go to Menti.com and use that code on whatever devices you have, and over the coming minutes we'd like to gather your reflections on what you think an Implementation Science Consortium in Cancer should accomplish, could accomplish, etc.
So I'm going to do a magic alt+tab and move on. I think this will work?
Yes, look at that.
Wow, technology is awesome when it works.
And so, we just want to give you a chance again. Like I said, this is information that we want to collect because it's important. It's important for the next couple of days as well as beyond, so that you can help anchor us in what you think the consortium should do.
Okay.
So, I'm just going to see... okay, we've got a few coming in.
This is awesome.
This question will be open.
So, even as I move on, we'll come back to
it, but this is, this is where we would love
to have your advice, your counsel, your wisdom.
Okay.
So, what are the next few days going to look like?
In the mornings, you'll see, we are gathered as a large group. The point of this is to think collectively about areas that we see as really important, and about potential stumbling blocks that we don't want to just sort of wash away, but need to take on centrally.
The first one, which will proceed after this particular intro, really takes a look at this: implementation science arose out of the need to make sure that research didn't get stuck without being able to influence practice and policy. But we really want to ask ourselves whether, by virtue of trying to legitimize implementation science as an area within various scientific institutions and research organizations, which I think we've done pretty well, we've in some way turned away from the very stakeholders, the very purposes, that we set out to serve.
Right?
So, if the best results from an implementation study don't get disseminated, if we don't figure out how to get the strategies implemented, then what was the point?
And so we want to start out with, again, getting your feedback as well. We have a panel to reflect on: is this a problem? What are the kinds of things that we can do about it?
Tomorrow when we reconvene we'll have some small group report-outs, and I'll get to the afternoon in a moment. And then we'll have a second sort of town hall.
Some of you may have tuned into our fireside or campfire chats, the monthly webinars that we've been doing. Both this first panel and the second one will resemble those, where it's part facilitated discussion. We've got a stimulus presentation tomorrow about Implementation Labs, and then more facilitated discussion and questions and comments from all of you as well.
And again, we're trying to capture all of
that.
As for the small group report-outs: at the start of the second and third days, we'll have the report-outs of what you all did in your various small groups in the afternoons.
And then we'll finish on Friday with a review
of the proceedings.
And we want you to have the chance to take a look at what's presented to you and tell us where we're missing things, where we've gotten things wrong.
So, our hope again is that we leave on Friday,
mid-day with a shared understanding of what
happened over the couple of days, and a shared
understanding of the kinds of priorities that
we need to take forward.
So, the afternoons are about small group brainstorming. As I said, the steering committee identified with us seven different themes that we felt were underrepresented in our ongoing efforts and portfolio: precision health; economics; rapid-cycle design; the implementation labs that I mentioned before; policy and equity and how they interface with implementation science; as well as technology.
You'll hear brief pitches from the facilitators of each of these groups, and they'll orient you to how they want you to be thinking about these things. But this is really the core of what we want to leave with: we want to be able to advance these particular areas. And then we'll want your feedback on what areas we're missing. What are the areas that we should continue to prioritize? Are there areas where we feel like there's enough and we can move on?
So, how do the small groups work? Today in the afternoon, the emphasis is really on generating different ideas. It is a sequential brainstorm, where we really want you to say: if I'm being asked to advance work in this area, what do I need? What would help me and my colleagues to move forward? You don't necessarily have to feel like you're the expert in that space, but you are, because you can help reflect: What do we have in the field? What's accessible? What helps you do your work? What would help you do it better? And try and brainstorm those ideas.
For day two, the facilitators will get together at the end of today to try and distill all of that great brainstorming into a set of ideas that they think we would be well served in spending our time expanding. So in the afternoon of day two you'll be going to a separate small group, where your task will be divvying up the main ideas and really trying to work out how they could be expanded. Might they need to be modified? The goal is to put some flesh on what is ultimately sort of a bulleted idea.
And then on the final day, it's about getting
your help in prioritizing among these different
things.
Those of you in the room will see on the back of your badge a number of different stickies. It'll be explained again later in case you forget, because I may, but this is a chance to signal the priority, the highest priority, of the different ideas. And so we'll have sort of a gallery walk on that final day where folks in the room can signal their priorities, and we'll have Mentimeter equipped so that folks who are not in the room will be able to weigh in on that.
Again, the mornings of the next couple of days will be report-outs from those small groups, so that anybody who's not in the room, who wants to participate, who wants to have a sense of what's going on, will have that sense. And all of that material, at least as much as we possibly can, we will make available on the consortium website afterward. So really it is an effort to try not to lose anything.
So, throughout this, we do want you to be reflecting on how far off we are with this initial conceptualization of this kind of consortium and its set of activities. Are these few days a workable model for future years to come? We have built into the moonshot centers RFA, which we'll be able to pull together in the fall, an expectation that there is an annual meeting that pulls people together.
So, this is our test case.
So, we need your best advice on what kinds of things are useful and we should keep, and what we should not do again. And we ask that because we want the answer. Don't worry about hurting our feelings; we'll cry, but then we'll appreciate the suggestions.
How do we move this kind of consortium forward?
So, thinking more broadly: you know, we tried to do a balance of open registration with some of the folks who are helping us plan this thing. But we know that we didn't necessarily get the perfect mix of folks, and we want to make sure that over time the who, the what, the where, the when, the why, and the how get better and better. So your reflections on that will be really, really helpful.
There will be a brief survey that you'll be
given.
But we really do want this to be an open line
of communication.
I'll tell you that I'm the only one who sees my email, who answers my phone, who tweets when I do tweet. Oh, and by the way, if you are tweeting, #ISCC19 is the hashtag.
I think that yeah.
Okay.
We're good.
So please do because we really want these
discussions to be, as I said, accessible.
So ongoing feedback is welcome, and I'd be happy to have anyone call me, email me, etc. I would love to hear from you.
So, getting back to this, just out of curiosity. Okay, good.
We have 89 people so far who have weighed
in on what this implementation science consortium
can accomplish.
And you can see, as I can, they're coming in fast and furious, but it looks like it's about coordination.
Right?
It's about reevaluating what do we expect
out of our studies?
What is the status quo and how do we move
forward?
Great point about this is not just something
where we're thinking what's best for researchers,
but clinicians, administrators, etc.
Priority setting, right, thinking about areas where we need to expand the field; publishing papers; bringing new folks in.
This is awesome.
This is exactly what we want.
So, thanks to the now 91 of you who have weighed in. Over the next few days, this is crucial for us to make sure that we capture what you're thinking; we recognize that in large rooms, or remotely, it may be hard to get your specific thought in at that moment in the conversation. This is our best effort to try and do it, and again it's a pilot, but so far it looks like it's working well, so thank you for that.
Okay, so I'm just conscious of time.
Okay.
We're good.
So just wanted to open it up.
What questions do you have?
Is there anything that I or Bob have said so far that would require further explanation, that's concerning, that's enlightening, that's not clear, etc.?
And there are mics at the tables. There's actually a standing one, if anyone wants to move over there as well. But please, probably in this case by show of hands: does anyone have any questions or thoughts?
Yeah.
ATTENDEE: How are we going to measure success?
DAVID: Success of the consortium?
That is a really good question.
So there are a few things that we will be looking to.
Okay.
One of them is, you know, what are we seeing over time? We continue to do portfolio analyses, so one measure is that we are tracking the content of the portfolio over time. One measure of success: when we identify particular areas that we want to see growth in, is that then being reflected back in the shifting applications that are coming in?
That's one. Number two: hopefully over the next few days we'll have somewhere between maybe 20 and 30 different ideas, maybe a few more, across the different groups, that are candidates for moving forward. If we at the NCI can take a look at those ideas, do our own scan of who's doing what in the area and which ideas are not yet being covered, and figure out a way to move those forward, that's another success.
The third thing, I think, is overall engagement, overall feedback, overall ability to get your advice. That's already a success, because we absolutely need an ongoing mechanism to make sure that everyone in the room, and wherever else, can feed back to us: What are the things that we need to do better? What are the things that we should potentially deemphasize? What are the things that we really need to ramp up?
I think the success will also be, you know, potentially in papers and collaborative activities, and in the potential for some of these areas to spin off as networks, or as sort of working groups, where they would define that.
So, I'll give an example that actually, you
know, we can't take any credit for whatsoever.
One of the areas that we thought would be
helpful to focus on is this interface between
health economics and implementation science.
And we were struggling, and folks in the room will, I think, back me up on this, with figuring out what would be the right way to try and get some of those activities going. Completely separately from us, folks within the VA had had a very similar thought and started to convene folks in the field to puzzle over what we need to do in the economics and implementation science space. They have these monthly meetings, and we're trying to make sure that the discussions we have here flow into that conversation. But we're also seeing the potential for some of these other areas to similarly breed ongoing discussions, ongoing sorts of forums where people can get together. Does that answer it?
Okay cool.
Great question.
And we struggle with that every day, I think.
Yeah, Katie?
KATIE: Oh, it's freestanding, I can, like, yeah, get up and jam. Hi. So, I'm curious to know whether the science of team science team at NCI is involved, or is going to be involved, in this consortium?
There are a lot of different disciplines represented here, and implementation science has its roots in a couple of islands. One of the things that's been my experience in going through the TIDIRH course, and in talking to some of the people who are more senior in the field, is that some of that disciplinary history remains in the thinking about methods and in developing the rigor of the science. And it doesn't necessarily fit with new disciplines.
And so it strikes me that there is some value
to some of the excellent work that's been
done on the development of team science teams
here.
DAVID: Sure, sure.
Yes.
A great ask.
I know that Bob has been tracking the science of team science initiatives, so he's going to…
BOB: You know, that's a good suggestion. And for those of you, some folks in here have been involved in those things, you know, there have been some similar trajectories in the building of that field. So, we now have a conference, the SciTS conference, just like we had the D&I conference, and then also, you know, the journal special issues, the workshops, and, similar to IS, we've also been moving between the NIH level, the NCI level, and interagency stuff.
So, Amanda Vogel, who's one of the original members of our team science group here, is actually going to be moving over to NCATS to help the CTSA program implement a lot of the science of team science principles in that program.
Because in each of these activities we are always looking for: what are the other big programmatic activities that can resonate, or connect, in terms of scale-up and implementation, not limiting our universe to NCI-funded centers or programs, or our phrase blah, blah, blah, yadda, yadda. So, trying to look across agencies, across institutes, etc.
The other thing I'll put a plug in for, because some of you are involved: in late August, we're going to be publishing our kind of culminating book, similar to the one on the science of team science, and there are a whole number of chapters, kind of on the evolution of that field, on the development of measures and theoretical methods, to spinoffs of the activities.
But we are very mindful, across these different initiatives, of trying to see what carries over.
One of the things related to IS: several years ago, some of you were involved in an initiative we called the Theories Project, with many people involved, Alex Rothman, Sarah Kobrin, Neil Weinstein, etc. And that was a whole effort to try to improve the use, testing, and evaluation of health behavior theory in studies. Not just a throwaway: gee, you know, I cited RE-AIM in my introduction, therefore this makes it an implementation science project.
So, and this is relevant to the metrics question that Graham had asked David about IS. One of the components of the Theories Project was not just the evangelism of, okay, people, let's be a little bit more rigorous about theory testing within your project, within your grant, not just applying the concepts to determine what variables are in your regression equations. It was that we went back and, over time, coded the content of grant applications coming into our behavioral research program on the degree to which the study design actually provided a specific test of a theoretically derived hypothesis in the project.
You know, as a metric of progress. And one of the things that the working groups in the Theories Project generated, again maybe parallel to IS early on, was that we needed to develop resources to make it easier to use theoretical constructs in studies, but also provide assistance in finding validated measures of those constructs.
So, we were evangelizing, you know, in 2001: okay, people, enough with the health belief model, for God's sake. And that was a start.
But a lot of it came down to: what are the tools that the field of cancer control needs in order to do more theoretically rigorous research that's going to advance and evolve health behavior theory?
And then we brought in other disciplines, and other projects developed the GEM resource. Some of you have used Theory at a Glance, which again was very controversial; some said, oh, that was the wrong thing, that was too superficial.
You know, but again, that was, you know, 20 years ago, whatever. But I think similar kinds of things apply: as your suggestions come forth, and as we are implementing those suggestions in terms of creating resources for the field of implementation science, for each of those components we want to also develop the metrics: Is it having an impact? Is it being used? How can it be modified? Is it changing the science, both in what people propose and what people publish, how evidence gets synthesized, how evidence gets used? So, we welcome your ideas on that.
DAVID: Yeah, and there have certainly been a number of projects internal to our group where we have relied heavily on Kara Hall and others for trying to think about what kinds of questions help us get at this: how do we build better teams?
And so, while it hasn't been made explicit, this is the reason why you're all here: to really help us think through what we're missing. What are the kinds of resources that we should build? Like the Theories Project: our folks, with Margaret Farrell in the lead, developed Implementation Science at a Glance, which was sort of modeled off this notion of Theory at a Glance.
And that was because, again, we're realizing that we can't expect that everybody has access, particularly as we're thinking about what practitioners and other stakeholders might need access to, of all that's now available around implementation science.
So, just to say: we're thinking through different ways that we can disseminate our information, different ways that we can capitalize on what other fields have been doing; we don't want to reinvent, you know, square wheels and things like that.
Other questions?
We have just a few minutes left; can we see whether folks are asking questions online?
Okay.
Okay so far no.
Okay.
We can.
Okay.
So, again, just a point for those who are not in the room: if you have questions, I think you can put them in the chat feature, and Jennifer is helping us to monitor that. Other questions?
Sure.
So, I got the nudge. Another thing that we have been doing, because we recognize that, while again these two and a half days are principally about where implementation science goes next, we've also wanted to make sure that implementation science isn't continually seen as the end of a scientific effort.
And so we've been working on different models
to make it easier for people who might be
earlier on, in the research continuum, to
figure out how to think with a sort of implementation
science lens, with a business lens with, an
entrepreneurial lens.
And so, a few years ago, April Oh and Cindy Vinson pioneered a new training for behavioral interventions that was based on a model that the National Science Foundation had run called I-Corps.
Our version of the model was SPRINT, which is SPeeding Research-tested INTerventions; we played around with the acronym a touch. But it's got jogging shoes. It's all good.
And we've now had four cohorts of SPRINT. The purpose of SPRINT was for folks who had an intervention, and who had either completed a trial or were in the midst of studying that intervention, to be thinking about: what's the ultimate marketplace for this? Who are my customers? And can I, with entrepreneurial help and mentorship, go through a process of rethinking or testing out some of the assumptions that I made about whether my intervention truly fits the circumstances that I think it should in order to be used?
And so the SPRINT model has now had, is it 37 teams, maybe? About right. Okay, plus or minus; we have confidence intervals. About 37 or so teams of investigators, collaborators, mentors, etc. have gone through this process.
The point of it was again to try and wake
us all up earlier to be asking some of these
questions.
We, I think, are very focused in implementation science on what assumptions folks are making about the evidence base, about our interventions, about the settings that may need them, about the people who hopefully will use them; and our goal is to say, let's not just assume, let's test those assumptions.
So, SPRINT has been another opportunity, and we've gone through, like I said, the sort of 30-plus teams, trying to figure out: is this something that at an NIH level we should do? Is this something that still has a clear place, maybe, for other types of interventions in the cancer space?
So that's another one where we'd love any feedback.
And anything else on that?
Okay.
Any other questions before we transition?
Oh, yeah.
So we're at, let's see, a hundred and eight.
Okay.
Yeah.
Yeah.
So, right, just tracking in our wonderful feed: de-link IS from mental health.
Okay.
I would love to know whoever, wherever they are, wrote that; that might be a personal dig, because I spent 13 years... Yeah, I spent 13 years at the National Institute of Mental Health, and if I haven't sufficiently de-linked, that's because of my own mental health challenges, which we all have, and which are very important.
But, yeah, so anyway, on all of these comments, we very much appreciate your continued input. This is great. A hundred and nine folks; keep them coming. And, oh, Sarah?
Oh, sure.
Perfect.
HEATHER: Thank you.
Hi, Heather Gold.
One of the things I noticed with each of the different workgroup breakout groups is that there actually could be a lot of cross-fertilization, and to keep that in mind. And I know people are mixing across groups and such, but bringing what you talked about in one group to another group could be really useful in that sort of team science thing, and also for the field.
DAVID: Yeah.
Great point.
And again, that's one of the reasons why we want to make sure that the discussions in all of the groups get summarized and reflected back at the front of the second day and the front of the third day. And also why we want everybody to have the chance to walk through and take a look at all of the different ideas that we generate. Because absolutely, I mean, we know that the different topics were not mutually exclusive. There are overlaps, and there are domains where we felt, okay, if you just look at that slice, this is an area that we need more work in; but by no means do we see them as completely independent. So, yeah, that's a great point.
And you'll be rotating through three different
groups and so the hope is that, as you go
through the first group, you're thinking about
that topic, as you go to the second group,
you might also be thinking about oh, what
was it that we were focusing on in that prior
group that I might bring to this one and so
on for the, the next day.
But yeah, and it may be that different groups yield very similar suggestions that we might want to combine in various ways. So, yeah.
Great point any other final points before
we move on.
Okay.
Hundred and thirteen.
This is awesome.
I think, yeah. Oh, so yeah, Sarah is taking the mic.
SARAH: I want to take a moment to say we are going to switch over to our panel presentation, which is an exciting panel. So, one, I'm going to invite our panelists to come to the front of the room. Two, this is also an opportunity for them to get up and stretch, because we've got an exciting hour and a half left before our next break.
DAVID: We've been told that the seats are very comfy, very comfy. They have a nice give; you can lean back a little bit, not too far, because you'll fall.
And Sarah has an announcement to make.
SARAH: So, briefly, as mentioned, your afternoon room assignments are located on the back of your name badges. If, for any reason, my magic has failed us and you do not have room assignments for the afternoon sessions, please meet me outside of this room during the lunch break, and we will make sure that you know where you are going for the afternoon.
And the hush has fallen.
So, I will turn it over to you, David.
DAVID: That was awesome, because I was about to... Those of you who have been part of our TIDIRH, TIDIRC, various trainings, know we try and do icebreakers.
The one that we did this week was a sort of
guilty pleasure movie.
You know, so we got a lot of great suggestions from our fellows and our faculty alike of movies that we should really be watching when we want to tune out our brains. Mine was Hot Shots! I don't know if anyone's seen that; it's the Top Gun spoof from, like, 1987 or something like that. Anyway.
Yeah, good you're all listening.
We almost did favorite karaoke song.
So if you're not careful, we will go around
tomorrow morning if the breakout sessions
do not go to our liking and people, whoever
you are, will be asked, not just to say the
name of the song, but to perform.
So, mine would be Daydream Believer by the Monkees, but I'm not going to sing it right now.
Okay.
So, as I said before, what we want to do in the mornings is try to engage in more of a sort of town hall discussion, to think about what kinds of challenges we might really want to not just sort of let go.
And this first one is a big one, right?
I mean, again, the idea that we have made a lot of progress in cancer centers, in academic departments, in legitimizing implementation science to the point where there are any number of open advertisements; and please, if you're looking for a job, there are lots of them out there. Implementation science is growing in numbers, we know that; but we didn't want to lose sight again of the reason why many of us started along this path.
That fundamentally there is effective care
that is not being received at every stage
of the cancer continuum that could be.
And so we wanted to start out with our esteemed
panel, and all of you who are in the room
and elsewhere.
To think through the elements: is this a problem, right? So, do we truly have a challenge that we need to put front and center? That implementation science isn't as linked with implementation practice and policy as it ought to be; and can we be thinking about strategies, suggestions, ways in which we do not lose sight of that ultimate impact that we want to make?
We said that implementation science should be sort of a winning, a win-win-win-win-win proposition where, if we do things right, we're advancing knowledge, we're advancing health, we're advancing access, we're advancing quality.
But if we don't, again, test that assumption, and if we don't think about who needs to be not just in this room or listening online, but part of the conversation, we should take a hard look at that and not just assume that we've got it.
We're good, because we're not yet there, right?
This is emergent, this is advancing, and this is a chance for us to say: we talked about that 17-year, fourteen percent gap thing. The last thing we want to do is extend that gap, right? Because we're taking a detour to start studying these implementation processes, but not thinking about how they ultimately translate into better lives, which is what we're all trying to do.
Okay.
So, the next thing, you know, the balance of the hour is really about our folks here; you'll see their lovely pictures there, and they're even lovelier in person here. You know, Rinad, Rani, Karen, and Russ, all of whom we've asked to, you know, give some of their thoughts about this sort of research-practice-policy pathway and what we can do about it.
So, Rinad, who's directly to my left, from the University of Pennsylvania, I've known for a while now. She is someone who sort of started in implementation science in grad school, thinking a lot about that sort of efficacy-to-implementation side of things. She has since graduated, gone through TIDIRH, gone through the Implementation Research Institute at Wash U, and transitioned incredibly from somebody who, you know, was sort of seeking guidance to someone who is constantly providing it.
And someone whose focus has been mental health, but, as is often the case, when you're good there are people from all over the research continuum who want your advice, who want your guidance, and who want your collaboration. And so Rinad is here and is currently a PI of an NIH-funded center around behavioral economics, mental health, and implementation science.
Next to her is Karen Emmons, whom many of you know. Karen is a past president of the Society of Behavioral Medicine. She has been influential as a member of our board of scientific advisors, has been incredibly generative and accomplished, an incredible advisor to many of us in the field. And we're just very grateful to have her wisdom; she was part of the working group for the Moonshot that helped us think through implementation science, and has been a wonderful colleague and really a guide for us all.
And next to her is Russ Glasgow, who has not been mentioned before. Russ was my predecessor; when I was at NIMH, focusing on mental health and implementation science, Russ had come to the NCI to, you know, help the team, help us all think through “where do we need to go.”
Many of you know that Russ is responsible, with colleagues, for RE-AIM, which nobody ever uses and nobody's ever heard of, and just so much wisdom, so much guidance; he has also been steadily challenging us: what are we doing? Are we doing it well? Should we be rethinking, whether it's efficacy and effectiveness, pragmatic trials, implementation science? Where are we going?
And so great to have Russ who's currently
at the University of Colorado, Denver.
Karen, who I skipped over, being at the Harvard School of Public Health.
And then on the end Rani Elwy, who I also
met when she was a fellow at the Implementation
Research Institute at Wash U, a number of
years ago.
Rani has had a distinguished career within
the VA.
Really providing, across a whole range of different health topics, that implementation science expertise, qualitative research expertise; really just helping the broader integrated delivery system to figure out how it can learn from efforts to improve care within QUERI, the Quality Enhancement Research Initiative.
She recently, is it as recent as that? Two years? One year? Okay, good. One year, so, recently moved from BU to Brown University, which is where I did undergrad. I haven't been there since, but I love it. And they haven't invited me back, I think, is the truth. So I'm really just trying to broker future travel for myself. But, yes, she is on College Hill at Brown University.
And so, again, we wanted to invite each of them, you know, to start out with a few comments. This is how we do some of our fireside chats: just briefly giving their sense of how they first got excited and enthusiastic about implementation science, and then, you know, sharing their concerns, if they have them, about this gap between implementation science, practice, and policy.
RINAD: Okay, alright so David already very
kindly shared some of the introduction of
how I came to implementation science.
I consider myself incredibly lucky to have kind of grown up as an implementation scientist in my professional career. And I came through the pathway that we often see: I was training in a treatment efficacy, you know, treatment development lab.
And I was a clinician, I'm trained as a clinical
psychologist, and I saw a number of kids who
came to the clinic that I was training in,
which was providing an evidence-based practice
for childhood anxiety.
And those kids had sought out services in
the community, and they hadn't gotten better.
And they would come to us and receive a dose
of cognitive behavioral therapy.
Sorry guys, I'm going to be talking about
mental health for a little bit.
And what I came to realize was that this observation really wasn't idiosyncratic to CBT for child anxiety.
And that this was really a larger issue in
the field.
So somehow, and I don't remember how, I heard about the first NIH Dissemination and Implementation conference, which was in 2007, my second year of grad school.
And I rallied up the resources to come down
to DC and attend and I'd also like to say,
I've been to every single one.
I feel very proud about that.
And I submitted a concept paper, which turned into my predoctoral F31, and I got the pleasure of having David Chambers review that concept paper and provide me technical assistance. And so he really guided me in the right direction towards my initial research agenda, which was about training community clinicians in CBT for child anxiety.
Subsequently, I had a K23, also from the NIMH, and I had the opportunity to get trained through the NIH institutes, as David mentioned.
So I was in the first cohort of TIDIRH fellows.
And I also had the great opportunity to be in the Implementation Research Institute.
So now I'm at Penn, and all I do is implementation
science.
In fact, I use it as my identity prior to
saying that I'm a clinical psychologist.
So that's how all in I am.
And, you know, I direct my own research program in implementation for kids around evidence-based psychosocial programs, but increasingly I have been working with a number of different investigators across disease areas, because many of the problems that we have cut across them.
And so, in terms of answering the question
about what I might be concerned about, I wear
a lot of hats.
I work closely with payers and policymakers around implementation of evidence-based practices, and I also train a lot of people in implementation science.
And I've begun to observe this phenomenon where I have so much to offer around frameworks; you know, we have a hundred and fifty-one. That's the latest paper Sharon Straus's group just published.
I can tell people about implementation outcomes, barriers and facilitators, implementation processes; but when our stakeholders really push me on implementation strategies and how best we might implement, I often say that we're still kind of building that literature. And that's not really satisfying for me.
There was a meta-analysis, or a review, done recently by Jeremy Grimshaw and colleagues, and I really think that the literature was equivocal about how best to implement.
And so, then I'm using my best guesses, rather
than using the literature to guide my colleagues
and stakeholders.
And I'm really unsatisfied by that, especially because I think that the whole point of implementation science is to move the needle, to have impact.
And so, my big thing, what I'm really interested in, is more rapid ways to produce actionable knowledge that moves the needle.
KAREN: Thank you.
Well, I kind of go back a little bit to the
beginning at a time when we were talking more
about diffusion and about dissemination.
We didn't have really a lot of chatter about
implementation or science in there, although
most of us in this field were thinking a lot
about the research and scientific implications
of dissemination.
And I believe, per your little history lesson there, I was among the first recipients of the supplements for dissemination grants; the first time, really, that NCI gave out money for this.
We had done an efficacy study, looking at
childhood cancer survivors who use tobacco,
and these were young adults, and their tobacco
use rate was pretty high.
And our intervention, which we developed thinking a lot about how it could be disseminated, really trying to think about scale (we weren't using that word then, but we were thinking about how we would get it out there), quite effectively doubled the quit rates among those young adults, and we were quite happy with that.
So, then we had an opportunity through the supplements to really think about what's the next step as we moved more towards a broader effectiveness study.
So, we used that supplement as a way to go out to all of the childhood cancer survivorship programs and really understand what providers thought about doing smoking cessation and integrating that into their care for cancer survivors.
And hands down, they said, absolutely this
is one of the most important things we can
do.
This is even more important than some of the disease surveillance things we do. Yes, we'll use our infrastructure for this.
We built a fabulous, fabulous set of activities to move the intervention into these care systems.
Beautiful.
We built it and nobody came.
It was really humbling.
Jon Kerner loves to tell that story: Karen got one of our supplements, and it was a disaster. But you learn a lot from disasters, right?
So, we really did learn a lot.
And that was kind of where I got the bug.
It was like, oh, this isn't quite so easy. It isn't just: here it is, go forth, conquer. So that was really the beginning for me.
That actually had a lot to do with why, for three years, I went to Kaiser Permanente. And I want to talk a little bit about that experience as a researcher, sort of overseeing some of the research activities and really trying to think about how systems think about integrating research into their efforts to improve care.
And so, it kind of leads to my gap area, the area that I worry we're not paying enough attention to, which is variability.
At Kaiser, it was so fascinating to me to
watch how the clinicians paid a lot of attention
to variability.
They tried to deeply understand, why are we
doing well in this clinic and not well at
this clinic?
Why are things working differentially?
And as researchers, I think we just try to beat the heck out of variability, right? Let's get rid of it.
And in fact, it should be our friend, and
we should try to understand it better.
And think more about why it is that some community health centers can actually implement a colorectal cancer screening program and get high rates of participation and others can't. Or why can that particular program do really well in colon cancer screening and really poorly in HPV vaccination?
Is it just a champion?
Maybe?
But I really think that thinking about variability
at lots of different levels and trying to
really embrace and understand that is something
we really need to grapple with.
RUSS: Okay, good morning. Well, if Karen was there at the beginning, as you can tell from my gray hair, I was there before the beginning of what we now call implementation science.
And just briefly a quick historical note.
I started out my career doing individual behavior change, particularly smoking cessation. And my story, my conversion experience, came relatively early in my career, when the field, largely funded by NCI, had reached an incredible milestone: through several different initiatives, we had these incredibly well-validated interventions for different settings and purposes, in primary care and hospital-based cessation, in school-based programs, in media programs; and again, these were world-class RCTs, replicated and that sort of thing.
Well, the idea, and the story that in this room probably only Michael Fiore and Bob might remember, is that we were going to put these together into a program that integrated them and would produce community-wide population impact, called the COMMIT trial or study, the Community Intervention Trial for Smoking Cessation. I believe, and maybe Bob or Mike might know, it's still the single largest smoking cessation study maybe ever, ever done.
So, long story short: we had the world's best people, and it was a community cluster randomized trial. We got together, and our idea, in our wisdom, was we'll just put these together and do all of these at the same time in the community.
And so, I still remember, I led our worksite group going out there, because we knew how to do this and had our protocol. And I can still remember, to this day, going out to our group of stakeholders who were going to work with us, our partners, with our hundred-and-fifty-page protocol, welcoming them to sit down and saying: okay, wonderful to have you, we want all your input, here's the protocol, if you'll just turn through what we're going to do.
So, it actually wasn't a complete disaster, but the results were more modest than we had anticipated.
So that's my story.
Like everybody else, I'm concerned about where we've gone and what's facing us.
At the same time, I really think we should celebrate how far we've come, because it's incredible even compared to a decade ago.
I think, very briefly, two related things I may be most concerned about. The first one is unlearning what many of us, at least what I've been taught, is good science, okay? What you need to do, the way it's done, what we've internalized or valued as the highest-quality science, and what the system for professional advancement, promotion, and tenure values and puts in there. The specific thing that, at this time, I'd really like to see us address today is the replication crisis, which we have not only in our field, but which I think goes all the way to basic bench science.
And I think one big part of it and what we
could do to address that is more transparent
reporting.
And in particular, reporting of issues like context, about exclusions as well as inclusions, at multiple levels, from the settings to the staff, the intervention agents, and not just the participants or the patients. And honest, transparent reporting on cost, and, as Brian Mittman in the back has just published and presented some great work on, adaptations: what adaptations were done to the intervention and the implementation strategies? And one beginning example of that, I think, is the StaRI criteria that many of you know, for standardized reporting on implementation science.
RANI: Thank you.
Well, it's wonderful to be here.
I did my PhD in health psychology, so I was interested in a lot of behavior change and a lot of provider behavior.
And I did my PhD in a program that was embedded
in a hospital and I never really realized
it was an embedded program until quite recently.
But from day one, my PhD advisor said you
need to know what everyone is thinking in
this world, in this space, like the providers,
the administrators, the patients.
So I was told to go out and talk to people
before I even develop any research ideas.
Which I now realize was a really big emphasis
on stakeholder engagement.
But we didn't call it that back then, it was
really learning from people who are experts
on the ground.
And this training was really valuable for me when I started my postdoctoral fellowship in health services research in the VA.
The VA has always, well, since 1998, not always, but for a long time, been putting a lot of priority on knowledge translation, trying to get evidence into practice, into usual care, with the development of the QUERI program, which David mentioned, the Quality Enhancement Research Initiative, which is now directed by Dr. Amy Kilbourne.
And QUERI was not at my particular center, which was the Bedford VA Medical Center in Massachusetts.
But in 2006, we had the HIV/Hepatitis QUERI program come to us at the beginning of Alan Gifford's tenure at our center.
And I was given the opportunity to take all
of my stakeholder engagement interest and
become what was called an implementation research
coordinator for this QUERI program.
And that meant that my job was to work with a lot of MDs, a lot of PhDs, but mostly MDs who really were trying to do implementation science and had no idea what it was.
My training up until that point had been through these excellent programs that Dr. Brian Mittman had developed, called Enhancing Implementation Science in the VA, which were open to people outside of the VA. But at the beginning, that was really all there was in terms of training for implementation science.
These started as in-person trainings, and they became webinars; in fact, they're still archived, and you can see them online.
And I was lucky to have Brian as a mentor
from a very early part of my VA career.
And then I joined the Implementation Research
Institute.
So, from very early in my career, I became somebody who was training others, and that's what I'm now doing at Brown University as the director of the implementation science core.
So, all of these things make me realize that one of the challenges that I think exists in this field is capacity.
So, in the VA, I had the opportunity to work with clinical operations on a lot of different implementation science projects, which has been so valuable: people who really are wanting to implement, but don't know how. And we do the whole trajectory of developing the evidence base, testing it out, and an implementation project trying to move it into usual care.
But what happens is our sustainability is
very poor, because they want us as the scientists
to also remain as the implementers with them.
And that is something that I think is a real
challenge for this field.
As a scientist, I would like to understand
why strategies work in one setting versus
another.
I don't want to be the person on the ground
necessarily implementing something.
And when I have engaged in these conversations
with clinical operations leaders, their point
is no one else can do this.
Which, I don't believe.
I believe that other people can do it.
But this is their reaction: they don't feel
like they have the capacity to do this.
And I don't want to leave the program, because
I don't want what we've worked so hard to
achieve to fall apart.
Maybe if we had focused on sustainability
from the beginning, this is something that
wouldn't have happened?
But I do feel that there's a challenge with
capacity in the field, and we need to think
about how we, as implementation scientists,
can remain scientists and bring other people
up to scale as implementers.
DAVID: Oh.
So, thanks very much for all of those introductory
comments.
I'm going to see if the... okay.
Yes, we've still got it; we've now got a hundred
nineteen respondents weighing in on what the
implementation science consortium could accomplish.
And now we are shifting to the question that
I will both ask our panelists and also all
of you to weigh in on.
And that is because, from what I've heard
from the initial comments, if the question
is, are you concerned about this potential
gap, I think I've heard four yeses.
And so we now shift to saying: so, what can
we do about it?
I'd like to ask all of you, wherever you are,
to weigh in, but also our panelists: what
would you say is the most important thing
that we need to change about the current research
to practice pathway?
And again, we will be collecting this.
So we very much appreciate what you have to
say on this.
And our panelists will start us off.
RINAD: How do I turn it on? All right.
So, I just want to kind of reiterate, and
maybe we pause for a second to say, I think
it's a little ironic that we are in danger
of recreating, in implementation science,
the research to practice gap that happened
with interventions.
So, I think that this is just such a salient
topic, and for those of you who know me, you
know that I'm a little bit impatient in life.
And as I noted, in my previous response, I
feel like things are taking too long and it's
taking too long to get to actionable information
for our practice colleagues to know how best
to implement.
So, I'm going to say something, perhaps a
little controversial.
Plus I want to keep everyone on their toes.
And the thing the suggestion I might make
is that we consider doing away with the efficacy
trial.
I’m going to say that again, I think we
should consider doing away with the efficacy
trial.
Because I believe that the true test of any
intervention is the intervention in the context
it was intended to be used.
So that's my suggestion; I would love to hear
people's reactions.
If we have to keep efficacy trials, which
I suspect will be the case, then I think it's
very important that we think about designing
for implementation from the very beginning,
and that folks who are developing interventions
do so in the settings in which they are intended
to be used, alongside clinicians who know
what it's like to deliver those interventions,
and with patients.
And I think that that would get us closer
but maybe let's do away with efficacy trials.
KAREN: Wow, I get to follow that.
How exciting.
I agree.
So, in terms of the gap that I am concerned
about, it's the issue of how we consider variability.
I think that what we need to do is really
understand that we aren't the holders of the
knowledge that needs to get translated.
There is actually knowledge among the practitioners
and practice settings that we work with, and
we treat tacit knowledge sometimes like kind
of a dirty sock, you know: it's over there.
Okay?
We kind of pay attention to it, but we don't
embrace it, we don't try to understand it.
And I think it's time to learn much more from
what our practice settings are already doing.
I'm doing a lot of work these days in community
health centers, and most community health
centers are actually doing quite a lot to
try and get people screened for colon cancer,
and they're trying to do a lot around trying
to get kids vaccinated for HPV.
But we know what the evidence says, right?
So, we kind of march in with our little evidence
toolkit and say: what are you doing against
this set of things, and we try to get you
to do these things.
And I think that is a mistake, because I think
there may be many things that we can learn
from how the settings already are effectively
moving the needle; and it may give us ideas
for other evidence that we should generate
and other evidence-based strategies that we
could learn about, as well as how we take
the strategies that we believe work and get
them embraced in those settings.
Okay, well, David, I have about six answers
to your question, so what I'll do, like a
good politician, is focus on one now; and
then later, when you ask me a question I don't
know the answer to, I'll bring the others
in at that time.
RUSS: Well, first of all, I just have to say,
Rinad, you really made my tail wag and my
day here, because as you and a few others
know, I've been trying to argue something
not quite that radical for years, and largely
beating my head against the wall, so thank
you.
The thing I will do for now, to try and be
brief, is to explain a little more, or dig
a little deeper, into what I meant by transparent
reporting and why I think that's so important
to address the replication crisis.
What it is, is that most of us, and I'd say
even those of us in this room, from the literature
I read, are really doing T3 research.
I'm not sure if it's quite T4.
And sometimes it's at the interface.
It's usually a half or a third of the way
there.
And what we do, because of wanting to show
that we are a real science, you know, which
was a struggle for many years, but I think
now we're accepted, is we don't report the
full story.
And by that, what I mean is we don't report
all of the settings that you could work in
within the health systems.
Well, usually what we do is we exclude some,
like, for example, the ones that we know aren't
stable.
We exclude those that don't have a good electronic
health record.
We exclude those we maybe haven't worked with
before or that are kind of chaotic.
Then we get down to the staff level, and we
often take only people that have a certain
level of training or are willing to participate
in a certain level of training.
And what I want to emphasize is there is not
a single thing the matter with that.
But the problem that I see, and that I get
upset about, is when you never report that,
and you kind of shove it under the rug, including
who you exclude before you ever get there,
and the representativeness of all this multilevel
thing.
And how I think that's related to the replication
issue is, I think many things that go on really
aren't a failure to replicate.
It's just a different context that you have.
And in some ways, you're almost studying a
different problem.
And the last thing, before I now shut up,
getting back to Karen's excellent point, is
another thing we don't report is variation.
Because we learned that heterogeneity is bad.
And so we never tell the story about how things
were different across the different centers
and sites we have.
RANI: My comment is really related to the
capacity issue I brought up, and I think that
in the VA we're making a lot of effort toward
this, but I would love to see more of the
people who are going to be involved in our
implementation efforts long term be involved
from the beginning – patients, providers,
administrators, leaders.
I don't want them to be just people we talk
to during formative evaluation or people we
follow up with during summative evaluation.
I want them to be part of the team, I want
them to be engaged, I want them to be doing
some of the work.
So I think that that's what we should be focusing
on, and perhaps that might lead to fewer of
the kinds of challenges that I've had in terms
of the implementation work that I've done.
DAVID: Cool.
So, thanks for those questions and those responses.
And thanks to the 49 folks who so far have
weighed in as well.
One of the recurring themes, at least that
I'm seeing as the scroll is happening, is
a lot of discussion about practitioners, about
stakeholders, about the c-suite, about the
whole range of different folks who, I think,
wherever you are as you're typing this in,
you're reflecting is a need.
And I wanted to see if the panel has any comments,
not just about who should be involved, but
do you have any sense of what the arguments
to make are?
Or what is an approach that could be used
to best try and, you know, meet busy folks
who have other things that they're trying
to deal with, but who are so vital to making
sure that we get the questions right and we
answer them in the right way?
So, sort of a slight left turn to not only
focus on who, but any advice or any experiences
that you have in doing a good job of what
I see as a true need: to make sure that as
many stakeholders as possible are engaged.
RINAD: David is going rogue.
That wasn't on our list.
All right.
DAVID: So pulling back the curtain a little
bit.
RINAD: Soon he's going to ask for karaoke,
and then... no.
All right.
So, my perspective is that we should be casting
the widest net possible with regard to who
we include in our conversations about implementation
of various practices.
And I think there's a particular group that
can be very difficult to get to but is critical
for us.
And that is stakeholders who might not agree
with us or those who may have a different
perspective than us.
So, I'm going to give a concrete example of
this in my own work.
I've been doing some recent work on firearm
safety promotion in pediatric primary care,
and we had an R21 from NIMH.
There I go talking about mental health again.
And we decided we were going to talk to nine
stakeholder groups: anyone who we thought
would be impacted by implementing a firearm
safety promotion intervention in primary care.
And then, once we started doing that work,
we realized we had left out a critical group.
And that was those individuals who identify
as firearm stakeholders: people who are firearm
safety course instructors, representatives
from national firearm rights organizations.
Those are individuals whose perspective is
typically not represented in research.
It's a very tough group to get.
Of course, we had thought, let's talk to firearm-owning
parents, but we certainly didn't think about,
maybe we should be talking to, you know, people
teaching safety courses or people from the
NRA.
And so we added that stakeholder group, and
I have to tell you that, in that experience,
the responses that we received from a very
difficult group to recruit, but we were able
to do it, were tremendously informative, if
not more informative than talking to the other
usual suspects who we talked to in our efforts.
And so I would make a case for making sure
that you're involving anyone who would be
impacted by the implementation, and that you
purposefully seek out people who might not
want to talk to you, and who may have a very
different opinion than you.
And the way that we did that was just by being
very transparent that, you know, we wanted
to identify a shared priority, and in our
case, our shared priority is keeping kids
safe.
It's not about gun control.
It's about gun safety.
And by identifying that shared priority, something
that the stakeholders had as well, we were
able to kind of move forward in that way.
So that's my answer.
KAREN: So, I totally agree with the idea of
big-tent stakeholders and really trying to
think through all who are involved.
And I think it's also about understanding
what your potential partners' priorities are;
I think you gave a really fantastic example
of that.
Years ago, we were doing work in low-income
housing settings, and we were focused around
colon cancer prevention.
And you kind of go in and sit and talk to
stakeholders in these settings: you know,
oh, we're doing colon cancer prevention.
They kind of glaze over.
And then you start to say, you know, a really
important strategy is trying to increase physical
activity.
And they kind of glaze over again, and then
you say, well, you know, this community has
really lovely grounds, you have a lot of space.
And then they light up and they say, we really
need to do things to make sure that people
use our property in ways that are promoting
good community norms and not allowing it to
be taken over by gangs and by criminal behavior.
And it's like: oh, physical activity becomes
a strategy for them to basically increase
the safety and proper use of their environment.
And so it's about going in and not saying,
I'm here to talk to you about colon cancer
prevention, but, I'm here to talk to you about
what your goals are for this community.
And what are the things that you're interested
in?
What are the priorities?
And then trying to figure out that point of
intersection.
I think sometimes we're so quick to go in
with our thing, and they have a lot of things
on their plates that they are already doing
as a practice setting.
So, I think thinking really carefully about
how we do the approach and engagement that
you have with those folks over the long haul,
so that you can actually develop with them
and grow with their interests, is really important.
RUSS: Well, I don't know that I have that
much to add about who should be involved,
other than, you know, all of the above, and
particularly the multilevel perspective.
It truly does take a community.
I think the things I might reflect on, and
this is just off the top of my head, I can't
say I've done this, is when and how we involve
different stakeholders.
I don't think that we need to have everybody
there at every meeting, and God forbid, you
know, we start with the infamous, dreaded
focus group and then that's kind of all we
do.
But I think we can think of subgroups or individuals
that might be involved for different things,
like: who might help you think about how you're
going to reach or engage the target audience?
Who might be the best to help to think about
something that's feasible to implement?
Who might actually help you with some user
centered design?
Who might towards the end really help you
with thinking about policy and sustainment?
So, I think thinking individually, not just
always collectively at the same time, about
how to most strategically use people, and
talking with them early on, might be one of
the best ways to do that.
Because I don't know that we have a real science
to say how to do that.
RANI: So, I think timeline is a big issue
when it comes to involving the right stakeholders.
And, you know, trying to practice what I preach,
I remember working with a primary care clinic
to involve them in a project, and I was meeting
a lot with the primary care service line director.
And, you know, I wanted him to be a member
of the team.
I didn't want him just to write a letter of
support.
I wanted him to be a coinvestigator.
I wanted him to help recruit.
I wanted him to, you know, be part of the
planning.
And he finally was excited and bought into
it, and goes: great, when do we start?
And I said, I have to get funding.
So, you know, that was a challenge: once you
finally are ready to get people involved and
they're excited, this whole funding piece
is a challenge.
So, you know, I don't know how to do the rapid
funding and we've done some of that in the
VA, but it's still not necessarily rapid enough.
So I think that that's part of the challenges
that I have faced in my work.
DAVID: And thanks again to the many of you
who have put your thoughts in; hopefully it's
not too distracting as you've seen these things
scroll up.
But I think, you know, it really helps us
to see again the who.
And again, there's been a fair amount of discussion
about different stakeholders outside of the
research community.
There have also been prior comments and questions
about how, even within the research community,
there's such a diverse set of perspectives
and expertise that we want to bring to bear.
So it is a truly large tent, and somewhat
daunting to try and figure out: how do we
make that work according to everyone's timelines?
Anyone who has tried scheduling a meeting
or a conference call with three people knows
it can be a challenge.
Some of you who are helping us get this ready
found that it was a challenge.
And so now we're talking about casts of tens
and hundreds and thousands potentially.
We’re going to move to the next question,
which again is for everybody, and we're going
to start out with the folks at the table,
and then, since you all have table mics, try
and see who wants to weigh in on this one.
But if we think about what we could potentially
accomplish as a consortium around this gap,
what do you all think that we collectively
should focus on?
What suggestions do you have?
What could the consortium do?
You know, because it does take a lot of people.
There are a lot of people in the room.
There are a lot of people who are listening
online.
And so, what advice do you have, or what should
we focus on as a consortium to try and address
what seems to be this persistent gap?
And, yeah, so at the table, we'll start with
Rani and then we'll move along, and then we'll
see what folks here want to add to that.
RANI: I was ready to go last, but okay, so.
I would love to see more people involved in
implementation science training projects who
are not normally involved.
I know Aaron Leppin is here, and what I love
about what Aaron has done, and some other
colleagues, like Beth Petrakis, who's going
to Wash U, is they've done this really terrific
policy fellowship, the Health and Aging Policy
Fellowship.
And I think by doing that, they're learning
a lot about legislation, they're learning
all about policy makers, and I would like
to see those types of people get involved
with our implementation work, especially work
that involves policy implementation, obviously.
So, you know, we haven't reached all of the
people that we need for that research to practice
pathway.
And having been involved in some policy implementation
work within the VA, I had to work very hard
to bring those people to the table.
But if it were more of a standard practice
to involve that type of stakeholder in our
work, that would have really benefited me.
So, I think we should encourage implementation
scientists, from their training stage, to
learn more about what policy makers are doing
and what matters to them.
As part of the project, I actually went a
little rogue as well and started interviewing
congressional staff who are part of the House
and Senate Veterans Affairs committees for
a policy implementation project that I was
doing.
It really worried the healthcare organization
where I work that I was doing this, because
this was not a normal activity; but they were
research participants and they agreed, they
consented, and I learned so much from talking
to them that really helped improve our evidence
base around our policy implementation.
It helped us gain the credibility that we
needed to be able to implement this particular
policy and study it.
So, I think that that was a really critical
part of what we were doing.
RUSS: Thanks.
I think one of the really important collaborative
activities we could do, to create products
and activities for the common good, would
focus on common measures.
And that's kind of a theme you'll hear throughout
the day, but I did want to check with David
to see: are we allowed, in the current political
climate, to say common good or the public?
Is that still okay?
DAVID: Yep.
The principles of this meeting are inclusion,
adversity.
RUSS: Thank you.
Thank you.
DAVID: So absolutely say what you like.
RUSS: All right, so by common measures, I
mean a few different things.
First of all, I don't think that there is
one best measure; you know, I think as with
most things in life, it depends.
But what measures might you share across the
different theories that you're using, for
different settings?
I'm particularly a fan of pragmatic and actionable
measures, but I think that we could share
a lot of these and, again, transparently report
their strengths and limitations.
I think a particular area has to do with the
costs, resources, and burdens of the interventions
and the implementation strategies that we do.
And the good news is, I think there are some
vehicles already out there that are great
starts.
The SIRC collaboration, certainly, with their
recent focus on measures that have both good
psychometrics and are pragmatic, is great.
The NCI's own Grid-Enabled Measures project,
GEM, that was mentioned before is another;
it could use some updating, but I think it
would be a great start on common measures.
KAREN: I think if we're going to close the
gap, we really need to think a lot about how
to bring in more tacit knowledge in a more
regular way.
There are large numbers of people doing implementation
every day that we don't cross paths with:
the state health departments, the local health
departments.
Electra Paskett and I have been working on,
developing, a project on vaccination.
And I had the opportunity to go meet with
our state immunization program, and they are
doing unbelievable work; they have a quality
improvement program.
I just have to say, I'm going to embarrass
them, but I was just blown away by how much
work they're already trying to do, exactly
in our wheelhouse, and we have no way to capture
that except to build relationships.
What a concept.
So, how do we start to build the mechanisms
to bring that knowledge in as more a part
of our regular discourse?
RINAD: So, mine is a little bit selfish.
But now implementation science is seen as
a legitimate science.
You know, I felt like I spent the last ten
years of my career trying to convince people
that that was the case, and now people agree.
And I think much of it is due to the many
funding opportunities, and all of the frameworks
and outcomes, etc.
But I feel like, we need more support for
those individuals who are building capacity
in their institutions and health systems.
So, I started tracking how many queries I
get a month for implementation science support,
and it's upwards of 20 to 30 people a month
who are reaching out, internally and externally,
nationally, wanting this kind of expertise.
And I think there's a real need for capacity
building and NIH has been wonderful at producing
training institutes and other institutions
are doing the same.
But I almost would like to see a kind of meta
support system for the people in their institutions.
And we've done this informally, but not in
a formal way, with regard to navigating capacity
building; particularly for folks who might
not be full, chaired professors who have,
you know, the luxury, perhaps, to spend most
of their time doing that work.
So, that would be mine.
DAVID: Cool.
So, this is still going, so please continue
to, you know, input your thoughts about this
question: what do we need as a consortium?
In addition to that, we just wanted to give
folks around the room a chance to weigh in
on the discussion.
What are we missing?
What do you think we as a group?
Again, as we're carrying through later in
the day, the specific topic discussions, the
brainstorming, we don't want to lose sight
of this broader issue.
So, hopefully, it will infuse itself into
your ideas, your brainstorming, your expansion,
you know, who should be at the table.
What can we do to reduce this gap?
But I just wanted to see if folks who are
at the table want to say anything.
I'm looking for hands.
You've got table mics.
I see one already.
And we'll try and get to what we can.
We have about five, ten minutes we'll take.
Right.
RANDI: Thanks.
Randi Schwartz.
And I just want to piggyback on everything
that's been said here, because if you look
around the room, it's mostly researchers;
we need the practice community engaged.
I'm always bugging David and Bob about this,
as a practitioner.
And there are only a few practice-based people
even in the room.
The steering committee for this is researchers,
as is the steering committee for the annual
conference.
So, I mean, it's real, and I liked the comments
up there; they were great: 50/50.
Now, Larry Green's quote: research to evidence,
research to practice, is a two-way street.
So the users, the consumers, the people who
implement need to be part of this, because
of that testing, that redoing; Karen's comment
about the state health department was one
example.
There's all this great work about testing
and retesting and quality improvement and
trying, but really getting that evidence-based
research into intervention practice.
It is critical to have practitioner and community
participation, with the researchers, well,
engaged.
So, yeah, we're on a good track here.
DAVID: Thanks for that.
And again, great.
Okay, so I'm just going to start; I'll postpone
my thought and just start going around the
room.
Yeah.
So, Jenny.
JENNY: How is that?
Okay, oh, you heard me then.
I want to build on the issue about disseminating
and local capacity, and it strikes me that
so much of the growth of excellence has been
the training that has happened centralized.
And many of our institutions have been able
to send one, or maybe two, a year.
And I'm wondering whether or not the next
iteration of this is sort of a train-the-trainer
model with some standardized curriculum and
an annual refresh of new content.
And maybe even some technical assistance for
building either boot camps or extended collaboratives;
because I think you're going to reach both
the practitioners and another group, the clinical
operations and hospital operations people,
who are just discovering message framing.
And so, you know, they're doing a lot of this
work, but they are not cross-talking with
us, and I think that's another way to really
extend the breadth of our impact: the people
that are basically hard money, who need to
improve cancer care delivery, and creating
opportunities for us to come together with
training and dissemination.
DAVID: So, where we can, make materials available,
figure out models for which there's the support,
so that folks are spending their time not
on top of everything else they're doing, but
in ways in which the training materials can
get scaled up.
I mean, we tried to do this train-the-trainer
for a while with all of our training programs,
but it's a lot to ask for one person, of course,
to go back to their institute; it has worked
in certain ways, but in other institutions,
it ends up being that n of one.
And I think you're saying, how do we scale
that up a bit more.
Okay.
DAVID: So the mics are at the table and if
you’re on deck maybe grab the mic.
RITA: I'm Rita Kukafka, Columbia University,
and thanks; it's my first meeting.
So, as I'm listening, I'm thinking about what
we did.
I'm in the Department of Biomedical Informatics,
and we had a similar problem when we started.
I started in, I guess, 2000 or 2001 as a postdoc
fellow coming from public health.
And none of the clinicians, you know, knew
what informatics was, which is a far leap
from now.
So, what we did to remedy that is we took
the training and we attempted to move it into
curriculums that would never be exposed to
informatics, like medical school curriculums,
nursing.
And I know it's hard; you can't introduce
a course in implementation science.
But there are ways to build the competencies,
move it into training, into training of professionals
who would never even know the word exists.
We've moved from there to now having a funded
clinical informatics fellowship.
So we now have medical students, who would
never ever even have understood what the word
was, now doing full one- or two-year fellowships
in the field.
DAVID: Thanks.
LINDA: Hi, Linda Fleischer.
What's really striking me, in some ways, is
two things.
One, I think we need to really be working
at a national level with other organizations,
you know, to get to the practitioners.
Because this is not a new problem in many
respects.
I feel like this is a problem we've had over
the years in terms of research and practice.
So I think that's one thing.
I think the other thing is kind of interesting,
again, also speaking as someone with gray
hair.
You know, NCI had a whole program on using
what works to get evidence-based practice
into public health, many, many years ago.
I'm not saying we should replicate that, but
again, this idea of how do we develop the
coterie of trainers, where it's not the researchers.
I think, for researchers, we need to be focused
on our research.
It is that same issue.
I think we have a great need to do this and
move things forward, but I think we have to
find the partners, both those organizations
and the mechanisms, to have the practitioners
who can really take these next steps.
And so I think those are two things maybe
to be thinking about.
DAVID: Okay, thanks.
So, 1, 2, 3, 4, and then, did everyone get
that?
I'm sorry, I'm pointing at people, and I apologize,
but I think, yeah: one, two, three, and four.
Because we're right toward the end of the
session, we want to move on to setting up
the afternoon.
If you haven't yet had a chance to weigh in,
please continue to weigh in; like I said,
this is Alt+Tab for me.
Yes.
Okay.
So, if you're not part of that group, we will
keep this question open so that you can weigh
in.
If we don't get to you, apologies; we're trying
to keep things running.
But, yeah, let's please…
RUTH: We've neglected... oh, Ruth Carlos,
University of Michigan.
One stakeholder that we've forgotten is the
payer, particularly as we develop not just
new reimbursement models, but new models for
patient-related out-of-pocket costs, as a
way to incentivize uptake of care.
DAVID: Thank you.
ATTENDEE: So, I work globally, so I want to
remind us not to forget the global divide.
As we are implementing here, maximizing, we
also need to remember that there is a divide
in terms of mortality and morbidity and all
of those issues, and take it global as well,
because a lot of implementation, of copying
of what we're doing here, is also happening
out there.
So, we need to remember that.
DAVID: Right.
And the chance to learn, wherever we are in
the world, can reflect on our own questions.
Yeah, great.
MONTSERRAT: Montserrat is my name.
I'm from the Cleveland Clinic, and I also
consult for a nonprofit.
I am new to this field, but I happened to
go to a different workshop at the University
of North Carolina a few weeks ago, and it
was mostly practitioners.
I come at it more from the research side,
so it's really interesting to see the contrast.
And one of the things that kept coming up
again and again, and that I see here too,
is that a lot of the effort to reduce that
gap is to bring the research to the practitioners
in a sort of we're-the-teachers, you're-the-students
kind of way.
And I understand that part, because that's
what, you know, academia and research is about.
But in terms of framing it as reducing the
gap, I think it should be less about that
and more about finding a more reciprocal relationship,
where the researchers must also go to where
the implementation is taking place, in the
spirit of a learner or a student, to see what
can come out of that, instead of in the spirit
of: here I am to teach you the ways of implementation.
Especially because a lot of practitioners,
rightly so, feel: we're the ones doing it,
while what you're doing is writing papers,
so don't come and tell me what to do.
And, I mean, like I said, I'm very new to
feel that my background is in anthropology.
So that relationship between sort of expert
and informant is something that I find very
familiar and I see it replicated here in a
slightly different way.
So, my two cents there.
BOB: So, just to reflect off of that comment:
just a few days ago, I had a meeting between
NCI leadership and all the leadership of the
Health Resources and Services Administration.
So the HRSA administrator, and all the directors
of, you know, primary care, the clinics program,
Ryan White, the whole leadership of HRSA.
And I posed a question to them.
I asked all the HRSA leaders: what do you
see as the number one barrier to effective
academic-clinic research collaborations and
partnerships?
And there was unanimous agreement among every
one of those people, who are not even sitting
in the clinics but are, you know, here in
the Washington DC bubble.
They said: consistent, persistent condescension
from the academics you deal with toward our
constituency.
And we've heard this in the listening that
Shoba and I have done in rural health and
rural health organizations; the same thing.
But they said it's daily, hundreds of complaints
about the academics they're collaborating
with, you know, at FQHCs and other clinics.
And it is this underlying condescension of,
we're the experts, we know what's better,
we're here to help you do what's evidence
based, whatever.
And they said, if there's one thing you at
NCI and NIH could work on with our constituency
of clinics serving underserved populations,
it's this; so that's why I'm raising it.
They said, please, next time you have an opportunity
to talk to your academic constituency, please
raise the issue of this constant, insulting
condescension that our clinic staff, our clinic
directors, our clinic medical directors hear
all the time from academics.
DAVID: So, thank you.
ATTENDEE: I have two points to make.
As someone who was not originally trained
in mental health or cancer, I feel there's
a severe lack of training capacity in implementation
science in other fields.
And I think we can still learn a lot from
other fields as well.
And the second point I want to make is about
sustainability.
So, I work with community organizations, and
I feel like, unless they're part of our research
grants, where we can build them in, they don't
have the capacity to write grants to really
implement what we would like them to implement
in the community setting.
So, unless they're part of a larger public
health system, or they have a contract with
the city, or somehow don't have to constantly
worry about getting the next grant, they don't
have the capacity at the community level like
we do here.
I don't think our evidence-based interventions
will really go very far at the community level.
DAVID: So maybe thinking about different resources
that would make it easier for somebody to
be in a position to do this, rather than having
to start from scratch and figure out, how
do I navigate this whole grant process?
If there's a way to lower the bar, I guess,
and…
ATTENDEE: Yeah, like, maybe, you know, if
this is a city's concern, have the city work
with them, or somehow lower their need to
be constantly seeking resources to support
the staff.
DAVID: Right, okay.
And, like I said, I think we're keeping the
question open, right?
Thumbs up?
Thumbs up; we're keeping the question open.
So if there are other things that you want
to add to this, please do.
Again, we're relying on this, and you'll see
it reflected back as we continue through.
But hopefully you also take on the emergent.
ATTENDEE: So, there is a subset of cancer
control interventions for which there is nobody
in the health system to do the intervention;
the two that come to mind first are diet and
exercise.
I do resistance training, and there is currently
no good way to do high-quality resistance-training
interventions as implementation science projects,
because there's no way to implement them.
But we want to move these evidence-based interventions
into practice, and I know I'm not alone in
this.
And one of the things that I keep running
into, and maybe I'm just too new to all of
this and you all have figured it out, but
conversations with others in my field agree,
is that this is a catch-22: we don't know
how to link with health services research.
We need to get the intervention paid for in
order to do the implementation science.
But in order to get the intervention paid
for by a third-party payer, we need to show
how we can implement it and what it costs.
And we need to test that implementation.
DAVID: Sure.
Right, so chicken and egg: start with this,
but you need that in order to start with this.
Okay, so, great.
Yeah.
ATTENDEE: So there's a whole need for connection
of fields.
There's a little bit of work, which you talked
about before, connecting IS to health economics,
and I feel like there is value, perhaps, in
more connections between health services research
and IS.
DAVID: So, great segue.
Thank you, and thanks to our panel.
Because what we're now quickly moving into,
with the help of our facilitators, is a sort
of quick pitch to set up the conversations
that will take place after lunch, where each
of the seven groups of facilitators is going
to give you a very quick pitch before we let
you go get some energy.
SARAH: So tremendously quick.
DAVID: Yes, so tremendously quick.
Correct.
SARAH: Perfect.
DAVID: Russ, don't go too far, because you're
actually first.
RUSS: Okay, yeah, that's right.
Thank you.
I'm a Mac guy, so I'm not sure about this.
Oh, there we go; it did work.
Okay, thank you.
We're told we have three minutes, and most
of us have way too many slides for that, so
I'll just start quickly.
The group that David and I are going to co-facilitate
over the next couple days is called Implementation
Laboratories.
Many of you who applied for the P50s will
recognize this, but for those of you who don't
know it, just to set this up: basic science
has well-established, usually well-funded
(by NIH and others), well-equipped laboratories
that are there on an ongoing basis, so each
new study doesn't need to start anew.
There's really no parallel in implementation
science.
There are some related things that I'll mention
in a moment, but no real parallels.
So, instead of having to reestablish connections,
and I would say in particular trust, and agreements,
and data-sharing arrangements each time, wouldn't
it be nice if we could get around this huge
upfront investment of both time and money?
So, implementation laboratories, and I don't
want to get hung up on technical definitions
or wordsmithing, basically involve an ongoing
relationship, an agreement between research
institutions and community or clinical settings.
And to editorialize a bit: ideally, I think
it would be community and clinical settings,
for real population impact.
Basically, four things we're going to do,
and then I'll just give you a sampler of some
possible examples.
Discuss: what are the key characteristics
of such a lab?
What can we learn from other fields?
Many of you have already made comments, and
there are some particular things, not exactly
alike, but related.
What can we produce for the overall field,
for the common good, not just for the people
who are part of this collaboration?
And then, as mentioned before, across-laboratory
collaborations.
So just some quick examples, ideas to get
people started in case you want to come to
this group, in terms of thinking about what
the key characteristics might be.
Well, what level and type of stakeholder engagement?
Could there be templates for data sharing?
I think that there are a number of activities
and work that's been done out there already
that we can learn from.
I don't know that any are going to translate
perfectly, but work done in learning health
systems, practice-based research networks,
embedded research programs, etc.
To reiterate something that Bob drove home
this morning: I think a key issue is how to
differentiate those that use the term implementation
lab, in the same way people use the term learning
health system or even implementation science,
just because it's PC, versus those that deliver
on it.
I think I can skip most of this, but a couple
of other key issues I'll leave you with, and
I'm sure my time's up.
One is to discuss what's in it for the community
and the clinical partner.
We kind of know what's potentially in it for
us, but take their perspective.
I think collaboration not only within our
individual labs, but across studies, would
be good.
And then, a segue to what was mentioned a
little bit ago in terms of sustainability:
what mechanisms might there be, in times of
lower or no external funding, to keep this
going so you don't have to invent it again?
Thanks.
SARAH: Well, Policy, you're up next, and Econ
and Costs, you are afterwards.
CYNTHIA: Always a pleasure to go after Russ.
So I am one of the three cofacilitators for
the policy work group.
Karen Evans and Bob Vollinger are my partners
in crime.
So this report, I don't know how many of you
are familiar with it: this is actually from
the US Commission on Evidence-Based Policymaking,
which was established by the bipartisan Evidence-Based
Policymaking Commission Act, which is probably
unusual in this day and age.
This came into effect by an act in 2016, signed
by President Barack Obama on March 30th, 2016.
The reason why I bring this act up is that
it directed the Commission to consider how
to strengthen the government's use of evidence
by using the data that we have available.
And it's a policy that the federal government
has, but the challenge is that it really focuses
on using data, but not on how to implement
the evidence that we already know works.
Which I feel, and I think most of you will
feel, is sad, and a missed opportunity for
implementation science.
So, let's see.
Let's go here.
Our work group has really been trying to figure
out how we can focus on three different areas
of evidence-based policy implementation.
The three areas we should consider exploring
are: the system and the context where the
evidence-based policy is being implemented;
the actual policy that's being implemented,
so the intervention; and then what the policy
strategies are.
So, what you're seeing here are the systems
and the context of evidence-based policymaking,
and this is very important.
The systems where the policy is implemented
differ.
It can be at the national or the international,
global level.
It can be at the state level, the county level,
or workplaces, schools, etc.
So, examples that can be noted here include
all the different types of variation in where
the uptake of these policies happens.
In the US, as most of you are familiar, there
are different state tobacco excise tax policies,
and coverage of comprehensive smoking cessation
treatment by state Medicaid programs.
It's also worth noting the potential connectivity
of evidence-based policymaking across different
levels.
Lower or more local levels of policy can be
a catalyst for the spread of state or federal
level policies, as we've seen in many different
areas of tobacco control; most recently the
spread of Tobacco 21 laws, with 21 new laws
that have been put into place.
HPV vaccination is another example where these
laws have been put into place.
The second level is really the evidence-based
interventions.
So these interventions, many of us at NCI
are familiar with the RTIPS interventions.
How many of you are familiar with RTIPS?
Speaks to my heart.
So what you're seeing here are just some of
the policy interventions.
There's not a plethora of policy interventions
on the RTIPS website, but we do have policy-specific
interventions that are available.
And this is just a sample of the types of
policy interventions for which we need to
figure out: how do they actually get put into
practice?
So we have a tobacco control policy for tribes,
and we've got how to actually implement walking
trails.
And then one of the most popular programs
on the RTIPS website is the Body and Soul
program, which, while relatively old, is still
the most frequently downloaded program for
implementation of healthy eating in churches.
And then finally, there are the strategies
for policy.
So, I've highlighted here the Virginia Cancer
Control Plan, and the state cancer control
plans are mechanisms by which laws and policies
can influence uptake across the state.
And how many of you are familiar with the
CEO Gold Standard?
Fewer than with RTIPS.
So this is a workplace policy where you can
get registered as a gold standard company.
NCI is registered as a gold standard.
We've put into place policies with the types
of workplace interventions that we have: our
cafeteria, our gym, the walkability of our
stairs, nonsmoking on campus.
The CEO Roundtable will actually certify companies,
and it's a way to recruit people, that you're
certified.
So that's just another example of different
strategies for implementation of policies.
Our work group has different ideas that we've
started to think about, but we're hoping to
get more ideas generated in these breakout
sessions, and you see them here: How do we
develop policy-relevant evidence?
What are the frameworks that can guide policy
implementation?
What about measures and metrics specifically
focused on policy?
And we're looking forward to those who are
assigned to the different sessions that we
have today generating more ideas and projects
that we can work towards in this arena.
So I'm going to now turn it over to Wynne.
SARAH: No actually economics.
CYNTHIA: Economics.
Okay.
SARAH: Thanks.
Wynne you're up next.
HEATHER: I'm delighted to be here to represent
our group, which will be focused on economics
and costs.
And I've been told to put in a little disclaimer:
being in our workgroup does not mean you need
to be an economist, because we're working
across disciplines and trying to help each
other further the field.
So I just want to be very clear about that.
In defining the scope of this issue for the
consortium today: there's a paucity of methods
and applications for economic and cost-effectiveness
analyses in implementation science.
And this lack of guidance informs our desire
to focus on methodologic developments; capacity
in the field, meaning who's actually going
to be doing this work; thinking about measurement
issues; and starting to think really hard
about creating definitions for the field,
the scope of implementation versus intervention
costs, who bears these costs, etc.
And remember, this is just a snippet today;
I teach a whole class on this.
So when we're thinking about the components
of implementation costs, we have the intervention
costs, the implementation strategy and the
costs that go along with that, and setting-specific
costs.
And, importantly, it's not just the intervention
cost: depending on how far out you're looking,
there are other downstream costs that don't
look like intervention costs but are related
to the intervention.
And it's come up a lot today, which totally
excites me, that people care about resource
use and costs.
These cost analyses and cost-effectiveness
analyses are critical to get stakeholders
to buy in, literally.
And they help us think about the feasibility
of implementing different evidence-based interventions,
but we lack the guidance to be able to value
costs.
Oh, and the last point is important too: costs
may differ by setting and by stakeholder perspective,
so you might actually conduct several different
cost analyses in your implementation science
study.
And that can affect which costs you include,
can affect adoption downstream, and it could
also potentially perpetuate disparities, which
is something we care about a lot.
And so some of the example ideas; remember,
we're brainstorming, so these are just things
to start with or to think about.
It looks like we may need a review of the
literature that reports on implementation
costs and cost-effectiveness analysis in implementation
science.
Portfolio analysis: what grants are including
costs?
How are they including costs, is it appropriate,
and are they comparable across studies?
We need to think about measurement, and reviewing
definitions and measures of implementation
costs and everything else that goes along
with implementation science and costs.
We should think about the different characteristics
of high- versus low-resource settings and
the characteristics of decision makers that
can affect implementation costs.
And clearly, it's come up this morning that
we need training opportunities for economic
evaluation in implementation science.
It's not just bean-counting; there's theory
behind this that we take from other fields
and can apply.
I think that's it.
SARAH: Wonderful.
And then we have rapid cycle design next,
to be followed by technology.
WYNNE: Good afternoon.
Thank you all for being here today and the
next couple of days.
My name is Wynne Norton; I'm here at the NCI.
I just want to thank Brian Mittman and Donna
Shelly for being great co-chairs on this proposed
topic over the next couple days: rapid cycle
designs in implementation science.
Sarah, to confirm, I have three minutes, and
if we stay under, I'm told there's a cash
prize for the team.
Yeah.
Okay.
That's what I thought.
Here we go.
Some background information: of course, as
most folks know, there's an increasing interest
in rapid cycle designs.
We've heard some of those this morning.
There have been recent papers published on
this, including the use of some of the established
designs and approaches such as PDSA, Plan
Do Study Act cycles, which really originated
in the early ‘90s from Deming and colleagues,
and relatively newer methods, MOST designs,
SMART designs, and so on and so forth.
And these types of designs, I think the argument
goes, and this is why the planning committee
has really focused on this as one of the topics
for the next couple of days, are really well
suited for implementation science and for
implementation studies.
They allow us to meet the needs of stakeholders,
researchers, and policymakers who are really
trying to work on an accelerated timeframe,
despite some of the challenges, as we heard
earlier, of some of those grant funding mechanisms.
And they really fit within the local context,
and allow us to understand some of the adaptations
and some of the issues that occur within those
settings, with the opportunity and potential
for generalizability to other settings.
There's general variability in the actual
and perceived rigor of rapid cycle designs,
kind of on a continuum, and depending on who
you ask.
And I think there's really an opportunity
here to leverage rapid cycle designs to enhance,
rather than detract from, research-policy-practice
partnerships, in terms of the accelerated
timeframe as well as the questions that are
important to those stakeholder groups.
So this is a pretty broad scope for our group,
and certainly we aren't going to be able to
cover all of this in the next couple days.
But certainly we should keep this in mind,
currently as well as when the field moves
forward.
And we look forward to different collaborations
and opportunities after this consortium meeting
to advance these issues.
So, some of our objectives here are really
to better clarify and understand the features
of these different rapid cycle designs; what
types of designs are best suited for particular
implementation questions, in what particular
contexts, involving what particular stakeholder
groups, and the cycle or timeframe within
which that might occur; to develop best practices
and guidance for those who are relatively
unfamiliar, or not trained in depth, in these
particular designs; and to highlight additional
information and research that's needed to
advance this area, contributing not only to
implementation science but to other research
areas as well.
So some example ideas, including but not limited
to these, of course, that we've been thinking
about for public goods that would come out
of this.
Identify and discuss the range of rapid cycle
designs, which could include a systematic
literature review or portfolio analysis, not
limited to implementation science, but informed
by other disciplines in which these types
of designs are being used and have been used.
Strengths, weaknesses, and challenges of different
designs for particular settings, to understand
the complexity and variability.
What types of designs are best suited for
particular questions, or for the phase in
which implementation occurs?
Public goods around trainings and workshops.
We've really talked about some of the practical
and logistical issues of IRB applications,
and I'm seeing some heads nod here, around
how IRBs view this type of design, particularly
when you aren't sure in advance, of course,
what you're actually going to be doing and
what you're going to be testing, and how to
better articulate that to the IRBs.
Guidance for how to engage clinical leaders
in a way that isn't condescending, as Bob
mentioned earlier, but rather approaches them
and says: we understand you're doing this
approach; how can we maybe help include more
rigor?
Again, without being condescending, in a way
that advances the science in addition to improving
the practice.
And then develop best practices and guidance
for using these types of designs.
Did I make it?
Rats.
SARAH: Next time, next time.
WYNNE: All right, there's one person left,
so…
SARAH: A couple left.
So next is technology, if you want to move
to the front of the room.
Precision health, you will be after technology.
MICHAEL: Hi everybody.
The technology in implementation science and
policy small group, led by April Oh, Rita
Kukafka, and myself, will identify ways that
the proposed consortium can advance implementation
science via technology.
And we're going to focus basically on the
electronic health record.
I would suggest that there's not an area of
implementation science that investigators
in the field both view with more promise and
experience with more peril.
While our small group will focus on electronic
health records, it will also address other
technological innovations, including mHealth
and digital health, and in particular will
address how data flows among these devices.
We will examine two large domains: first,
how technology, including the EHR, can facilitate
implementation science; but we will also discuss
how technology itself can serve as a research
focus for asking and answering implementation
science questions.
I happen to live and work primarily in Madison,
Wisconsin, which is just five miles down the
road from possibly the epicenter of electronic
health record technology, Epic Systems Corporation.
Given this proximity, Epic frequently comes
up in conversations.
And I've often thought that if I had a dollar
for every time someone suggested, why doesn't
Epic just fix it by putting A or B or C into
their EHR, I would be a wealthy man.
In particular, those of us doing clinical
research often turn to the EHR to facilitate
that work, and we're often challenged by it.
But sometimes we're rewarded with success,
and this technology can open up doors that
even a decade ago would have been just impossible.
So, using the EHR as a technology model, our
small group will discuss in detail both barriers
to using the EHR to advance implementation
science and strategies to overcome those barriers.
The broad goal of this small group will be
to consider how the EHR and other technological
innovations can serve both as a research tool
and as a research focus to advance the work
of the proposed consortium, and more broadly
the work of dissemination and implementation
science.
And I did that with no slides; that was great.
SARAH: Nice work.
Very nice work.
And so, precision health, you're up next.
Bear with us; we have two more, and then we
are so close to being on time.
I'm very impressed with us collectively.
I have a few notes and then we'll break for
lunch.
So again, we have two more presentations,
precision health followed by context and equity.
Did I get that right?
Perfect.
ALANNA: Oh, good.
So, I'm not going to stand between you and
lunch for long.
Okay, so thank you so much.
I'm here to talk about the Precision Health
and Big Data Working Group on behalf of my
co-facilitators Muin Khoury and Mindy Clyne
as well.
And so I want to first set our scope, with
the disclaimer that the three of us, of course,
do have a genomics and genetics lens on this.
But I want to be clear on our scope of precision
health and big data: while genomic medicine
is a big part of precision health, and genomic
data is a big part of big data, and I am guilty
of using them interchangeably myself, we are
going with the broader definition here of
precision health as maximizing those outcomes
that the patient most cares about, and minimizing
those that the patient fears the most, on
the basis of as much knowledge or data about
the individual's state as is available.
So that implies genomics, but includes everything
else: environment, social, everything.
And the precision public health definition
is similar: more accurate population and individual
level data, including genes, environment,
behaviors, again, all of that data that is
available to help guide precision health at
the individual level.
And as you heard, in all of that is also the
big data: the health record data, the clinical
data, genomic data, sequencing data, administrative
data; any of those big data sets that we can
combine together, again, to make precision
health a reality.
And just as a bit of background, and to set
the stage for where we hope to brainstorm:
we've really just dipped our toe in the water
here.
To get to Rinad's point of wishing we could
get away with not doing efficacy trials anymore,
this is actually a space where we're sort
of in a bind, because things are changing
so rapidly.
And so we have very few studies that use frameworks;
a lot talk about just uptake and preferences,
so we're really just in pre-implementation
mode in a lot of this.
That's from work that we've done recently,
mostly around genetic testing, some around
somatic testing in cancer, mostly around germline
testing for hereditary cancer risk.
The NIH grant portfolio is also very small
in this area, with a lack of frameworks used
to help us understand and figure out how we
are building evidence while we're implementing
in the context in which this needs to happen.
Again, under the Moonshot we have only one
funded study, and a second one that's not
funded under the Moonshot RFA but is funded
under Moonshot funds, on Lynch syndrome screening.
In big data we've got four going so far that
we found: a hybrid type I focusing on symptom
management, a hybrid type II, and a pragmatic
one around colorectal cancer screening at
the state level.
So we're really just starting this out, and
so, again, very much hoping that you help
us brainstorm the public good on this.
Some of the things that we were thinking of,
and hope to expand on with you and your thoughts:
what do we need in this area?
Do we need, like, a Rosetta stone to help
us talk with all of these different stakeholders?
We've done a little bit of work in that space,
but I think there's a bigger need there.
Do we need to develop more methods with our
research ethics colleagues, again thinking
about the IRB, and how we do this in these
learning laboratories and healthcare system
environments where we have to do these pragmatic
trials?
Do we need networks or conferences to bring
together all these people working in this
space?
Do we need to build bridges, as Colleen McBride
has started doing between the Society of Behavioral
Medicine and the National Society of Genetic
Counselors?
Do we need to do that with our AMIA colleagues
and our communications colleagues?
Things like that.
So those are what we really hope to brainstorm
with you all this afternoon and over the next
two days.
SARAH: Wonderful, thank you.
And I appreciate you all sticking with us;
I know it is lunchtime.
We have one more presentation, and I have
two quick notes, and then we will break for
lunch, and I will give you more details about
that.
So thanks for sticking with us.
RACHEL: Hi, everyone last, but not least.
I'm Rachel Shelton from Columbia University.
I'm thrilled to be co-facilitating the group
on Health Equity and Context with Prajakta
from the NCI.
So I first want to make sure we're on the
same page when we're talking about disparities.
Health disparities are really about health
differences and inequities that are closely
linked with social or economic disadvantage.
And I think it's important to recognize that
in both implementation science and cancer
research, we've really focused on racial and
ethnic health disparities, and the field would
benefit from broadening the lens there and
thinking about the other social dimensions
through which inequities exist.
So I'll put that just kind of as our starting
place.
There's also been a move from not just thinking
about understanding and addressing health
disparities, but really, from a justice perspective,
promoting health equity, and thinking about
the conditions in which we can create fair
and equitable contexts and opportunities to
be healthy.
And I think we've made progress in this area,
but there is much more to make in the context
of cancer prevention and control.
And so where does context come in?
It's well documented that health inequities,
and health in general, are intimately linked
to and influenced by the multi-level context
in which we work, live, play, and receive
care.
If we're going to move the needle on health
inequities, we have to deeply understand context
and address it.
And this is just one example of the multiple
levels at which this might exist for cancer
care delivery; I think the one level that's
missing here is global, which is important
as well.
So Prajakta and I did a deep dive, because
we felt like health equity and context is
a very big area, and the field has actually
done some foundational, important work in
implementation science that we wanted to recognize,
so we're on the same page.
I think it's important to recognize that a
lot of our D&I conceptual frameworks have
put context front and center.
We recognize that context is multi-level and
dynamic; frameworks like CFIR, the Consolidated
Framework for Implementation Research, and
many others talk about outer and inner context.
But interestingly, a recent review in implementation
science suggests that when we're actually
addressing and intervening on context, we're
mostly focusing on the organizational level,
especially in the clinical context.
So, I think there are a lot of opportunities
for us to think about how we can better intervene
at other contextual levels, more explicitly,
in the field.
We also have made great progress and thinking
about adaptation.
There is a great paper that Cam Escoffery
and Maria Fernandez have done that has identified
some of our adaptation models.
And I think, implicitly, a lot of these adaptations
relate to cultural and setting adaptations
with the goal of addressing health equity.
And we have great frameworks for thinking
about reporting this.
And I think this will serve as a great foundation
for the field to advance this work.
And finally, we've gotten more explicit; there's
great momentum this past year in really
thinking about the intersection between health
equity and implementation science.
So, from Arkansas, Eva Woodward and others
have actually published this year the Health
Equity Implementation Framework, which actually
starts to bring together implementation
science frameworks with social determinants
lenses.
So, I think this is a great foundational start
for the field.
And we've also seen people start to think
about the application of existing frameworks,
like RE-AIM, and how we could bring an equity
lens to them.
So, again, these are great foundations for
the field.
So, Prajakta and I and the group are really
going to be thinking big picture:
how can we as an implementation science community
advance?
And I think really making more explicit the
incorporation of health equity and context
into our work to address cancer prevention
and control.
And some of the things we've been talking
about are, you know, what are the methods
and strategies? I loved the discussion today
about stakeholder engagement, community-based
participatory research, and bi-directional
influence.
How can we think about bringing that into
our work?
What are the appropriate models and frameworks?
We have so many.
Should we work on those existing ones and
adapt them?
Or do we need new ones with the focus on health
equity?
How can we best think about measurement that's
explicit about health equity?
And how can we better select implementation
strategies based on both context and equity
considerations?
So these are the things we're excited about,
and we're really excited about hearing from
you all as well about your ideas.
So enjoy lunch and look forward to seeing
you this afternoon.
[Event Concludes]
