JOANN STARKS: I want to welcome
everyone to today's session
on, "Testing the Waters
Before Diving In--
Determining the Type of
Knowledge Gap and the Readiness
of Knowledge to
Fill It," a webinar
for the Center on Knowledge
Translation for Disability
and Rehabilitation
Research, or KTDRR,
which is housed in the Austin,
Texas office of American
Institutes for Research, or AIR.
The Center on KTDRR is
supported through funding
by NIDILRR, the National Institute on Disability, Independent Living, and Rehabilitation Research, which is a center within the Administration for Community Living of the Department of Health and Human Services.
My name is Joann Starks.
And now I will introduce
today's presenter, Dr. Travis
Sztainert, who is the
knowledge mobilization
specialist at Frayme--
that is spelled F-R-A-Y-M-E--
in Ottawa, Ontario, Canada.
Travis regularly consults with
decision-makers, regulators,
and other organizations
to foster collaboration
and provide the best available
evidence to support their work.
He leads the
development of products
from large-scale
evidence reviews
to brief reports
and data analyses,
to digital resources and
social network analysis
to demonstrate
wide-reaching impact.
He was a knowledge broker
content specialist at GREO
from 2015 to 2020.
Travis also works as
a research consultant
with various addiction
and mental health-related
organizations.
In addition, he has developed
and instructs the Certificate
in Knowledge Mobilization
through the University
of Guelph.
Travis holds a PhD in Psychology
from Carleton University, where
his interest in mobilizing
knowledge began,
and a Knowledge
Translation Professional
Certificate from SickKids
Learning Institute.
In this presentation, Travis
discusses the processes that
underlie knowledge translation.
He focuses on examining the
knowledge-to-action gap,
by introducing
overarching microgaps,
and proposes the creation
of a gap assessment tool.
He also presents an
end-of-grant readiness tool
he has been developing over
the past several years,
and discusses some
partnership activities
around the use of the tool.
This is a follow-up
to a presentation
that Travis provided a few
years ago for our sister KT
Center, the Center on Knowledge
Translation for Employment
Research.
Travis, are you ready to begin?
TRAVIS SZTAINERT: Yes, I am.
Thank you so much.
So I want to thank
everybody for coming
to watch my webinar today.
So I'm going to be
talking about testing
the waters before diving in.
So the idea here
is that you really
want to get your
feet wet before you
dive into knowledge translation,
knowledge mobilization,
whatever you call it.
And so this work
really began when
I was doing my PhD and postdoc
at Carleton University.
I started as a sort of associate for the Carleton University Gambling Lab's gambling research and exchange hub.
Then I was a knowledge
broker and content
specialist at GREO, which had a
knowledge translation mandate.
And now I'm a knowledge
mobilization specialist
at Frayme.
So I just wanted to thank all
three of these organizations
for basically giving
me the time and space
to work on this passion project that it's become.
So when I first started hearing
about knowledge translation,
it took me a while to really
get a grasp on what it is.
So when I first started,
I did all the research.
I found that there are more than 100 terms that refer to all or a subcomponent of knowledge translation and exchange.
You get things like knowledge
transfer, knowledge management,
diffusion of innovation,
broader impacts
and implementation science.
And when you start getting
down into the frameworks,
there's a plethora
of frameworks that
are looking at different aspects
of knowledge mobilization and translation, some based on
integrated models of knowledge
translation, some focused
more on end-of-grant knowledge
translation.
But what I really found is that,
despite the frameworks that I
was looking at, there
wasn't enough focus, for me,
on how to determine if your
knowledge is ready to be used,
and more importantly,
what type of gap
your knowledge is
filling in the landscape.
So I really needed
clarity to move forward
in my own work in
terms of determining
how and when to use research
to fill knowledge gaps.
So I sort of created
my own framework.
This was done way back
during my postdoc.
It's called The GREaT Flow Chart.
And you can see that there's a start in the upper left and a finish down at the bottom.
But the real takeaway here is that there are three phases that I pulled out.
And the first is the
knowledge determinant phase.
And this is determining
what knowledge you have
and what gaps you're
looking to fill.
And that's where we're going
to focus our talk today.
Then there's the
knowledge planning phase,
planning on what
you're going to do,
how you're going to implement
the knowledge into practice.
And then there's the action phase, actually doing that work-- so that's the implementation and evaluation piece.
So let's look at the first phase, knowledge determination.
That's what we're going
to talk about today.
So the first thing that you
hear about when determining
if your knowledge is ready to
be used and in what capacity
is to really identify the gap,
the knowledge-to-action gap.
And this is called different
things to different people.
Some people call it an innovation-to-implementation gap or a discovery-to-applicability gap.
But the real question
is, what's in the middle?
How do you fill these gaps?
And I think there is an implicit assumption here that knowledge will fill the gap-- that getting the right information or the right knowledge to the right people at the right time in the right format will fill that knowledge-to-action gap.
And I think maybe
knowledge-to-action gap
is a misnomer.
And my idea was that
maybe we can identify
specific micro-gaps
along the process that
could help inform
where, specifically,
that knowledge-to-action
gap lies.
So from the research, I came up with really two overarching gaps.
And the first is sort of an
attitude or value-to-action
gap.
And this really occurs when what
people say they value and want
doesn't equal what they do.
So there's a discrepancy
between what they say they want
and the behavior
that they employ.
So I might say, for example, that I value eating healthy food.
But I regularly
visit McDonald's.
So there's the gap
between what I say I value
and what my behavior shows.
And here, people are generally
really bad at telling you
what they want.
So I might say I
want to eat healthy.
But when it comes time
to choose the food,
other things might
be more important--
so like the cost of the food,
the convenience, how ready
am I to really commit to the
healthy lifestyle and that sort
of thing.
So when it's actually
time to decide,
there can be a
real big disconnect
between what people say they
value and what people do.
And this gap is often addressed
using an information deficit
or rational choice
model of human behavior.
So often, people will say,
well, if you just give people
enough information,
if you just tell them
the consequences
of their actions,
you help them build
out a plan for dealing
with those actions--
so you tell them, for example, that the fast food is unhealthy, and give them the guidance there-- then they'll do it.
Then their values and their behaviors will become more in line.
But models based on
this information deficit
hypothesis really
fail to account
for cultural, institutional,
and structural constraints.
And actually, there's
a whole other webinar
that I've given that
you can look up,
called "An Introduction
to Nudge Theory."
So nudges in behavioral economics are really ways to change behavior without focusing on information, as in the deficit hypothesis.
The other overarching
gap that I came across
was the intention-to-action gap.
And this is where
people fail to implement
what they're intending to do.
So a lot of psychological
theory states
that intention is
really one of the best
predictors of behavior.
And if you're doing a survey,
or if you've ever filled out
a survey after a webinar, one
of the things they might ask you
is, what's your intention to actually use this new knowledge in practice?
And so there's a
bunch of theories
that look at intention: the theory of reasoned action, the theory of planned behavior, the attitude-behavior theory, and protection motivation theory.
But the problem here is that,
even among those theories--
so there was a meta-analysis of meta-analyses conducted, which found that intention really only accounts for 28% of the variation in behavior.
And they found that intentions change over time, and that temporal stability does improve consistency between intention and behavior. So if you continue to say that you're going to do something, you're more likely to do it.
But really, it comes down to the fact that intention, even though it's the best predictor we have, is still a relatively poor predictor of actual behavior.
And so I want to really take
those two overarching gaps
and look at them more
closely, and see,
can I find micro-gaps
in why people might not
do a behavior within those
bigger overarching gaps?
And so, looking at the literature, I came up with some.
One of the first ones is the communication-to-action gap.
So this is where somebody might not follow through with a behavior change because they just didn't have great instructions. There was poor communication, poor directions for how to change.
In order to get people
to change their behavior,
they need to know why,
where, what, when, who,
and specifically, how to
change their behavior.
So some of the reasons this gap may exist include miscommunication due to a lack of clarity around the goal.
I don't know what
the end goal is.
What am I supposed to be doing?
Lack of clarity around
how to get to the goal--
OK, so I know what
I'm doing, but how
am I supposed to get there?
Or false communication or
other intentions-- and so this
can be lack of support.
This can happen sometimes
in change management, where
they say, oh, we're going
to be more inclusive
or have greater diversity
and accessibility.
But then there's sort of a
lack of follow-through, so
other intentions that
are coming through there.
And so there's a lack of clear
communication around the issue.
Another gap is the
motivation-to-action gap.
So even if you communicate with me and I know how to do something and when to do it-- even knowing that, even knowing how-- I might choose not to do it.
And I might lack the motivation
to really follow through
with what was told to me or
what I said I was going to do.
And the reasons here
might be numerous.
People may not buy
into what we're
trying to suggest that they do.
There might be a lack of belief.
It doesn't make sense to them-- a lack of clarity.
That goes back to the
miscommunication piece.
People can be very anxious
and concerned about changes.
There can be a lack of focus.
They're just not interested in
making the effort to change.
It seems like it's
too effortful,
and I don't feel like doing it.
Or they're lacking a
big picture to guide
them-- so lack of
destination, lack of end goal.
And there are some
arguments here that,
is this really our problem?
Should we have to address
motivation-to-action gaps
in knowledge translation?
And here, some people make the argument: don't people bring their own motivations with them?
And I would say no.
I would say, there's a lot we
can do to help motivate people.
And if knowledge translation, or knowledge mobilization, has taught us anything, it's that emotions and motivation really do matter in helping people make a change.
OK, so even if you communicated
clearly and are motivated to do
something-- so I understand
the knowledge, I appreciate it,
I believe it, I want
to change my behavior--
I may actually lack the
experience to do so.
So there might be a skill gap
there, a skill-to-action gap.
So this is a great example that I came across in a book.
But it's basically, if I'm
going to hike the Appalachian
Trail, the only
thing that's going
to get me ready to do
that is a lot of practice
and conditioning, like doing
multiple smaller hikes.
Like even if I get the best
gear, that won't really
help me.
If I have the best supports,
if I have the best technology,
that won't help me do the trail.
Understanding the starting
point and the ending point
and knowing how I'm going
to take on the trail,
and having a destination
and a map and guidance,
again, that won't really
help me when I'm actually
walking on the trail.
It might help a little
bit, but it's not
going to carry me the
whole way through.
Route planning won't help.
Maybe spending some time on an
elliptical and a stair climber
will help.
But probably not.
Probably what I really need
to get me ready to hike
this trail is actual
experience hiking.
And so sometimes, it
comes down to that.
If the gap you're trying
to fill comes down
to a skills-to-action
gap, you need
to give people the
chance to really try out
in safe, smaller ways what you're asking them to do, especially if it's a big ask.
And then there might be
a habit-to-action gap.
So even if I have the
skills, if what you're
asking me to do to
change my behavior
is going against an
already preformed habit,
that's going to be a problem.
So we all know creating
and maintaining new habits
can be really difficult.
This is called the New Year's
resolution effect.
Everybody tries to make a
resolution on New Year's, and
a very small minority actually
follow through with that.
And a lot of that comes down to habit.
So unlearning and de-implementation are new concepts in implementation science and knowledge mobilization more generally.
And here, as the saying
goes, old habits die hard.
So if our brain and our body are used to doing something, there are automatic processes, and those are very hard to unlearn.
So the idea is that teaching
an old dog new tricks is
a relatively easy thing to do.
The hard part is
unteaching an old dog
a trick it already knows.
And so this is something that's currently being looked at in implementation science.
There's this idea of changing
habits through substitution.
There's a great book
called The Power of Habit.
And if you're dealing with
a habit-to-action gap,
if that's what's stopping
people from doing the behavior
or taking on your
knowledge, then I highly
suggest reading that book.
And it requires conscious
effort to do something new.
So making new behaviors as easy
as possible will go a long way.
One of the last gaps might be
that the environment really
isn't set up for success.
So the environment-to-action gap has a lot to do with change management approaches, wherein maybe there's not enough support in the environment for a changed behavior.
So maybe the organization
doesn't really
support the change.
Maybe there's not enough
materials or reference aids
to support a person changing
within their environment.
There's the question,
are people being
rewarded for making the change
or are they being dissuaded?
So is it taking more time
for them to make this change
and they're not being
compensated for that time?
And is the change being
reinforced over time?
And these are all
things that you
need to think about if there is
an environment-to-action gap.
So the way I pictured it is that, in terms of the knowledge-to-action gap, there are really these two overarching gaps-- an attitude- or value-to-action gap and an intention-to-action gap. And within those are possible smaller micro-gaps.
So there might be a
communication gap.
There might be a
gap in motivation.
There might be a gap in skills.
There might be a habit gap.
Or there might be
an environment gap.
But this is nothing new.
So if you really look at
the literature, when people
are looking at the attitude
or value-to-action gap,
there are existing
frameworks and models
that can help examine the
attitude, value-to-action gap,
including the theory of planned
behavior, social cognitive
theory, health belief model,
and stages of change model.
And likewise with the
intention-to-action gap-- so
there's Maslow's
hierarchy of needs,
the hierarchy of the four
sources of motivation,
and Arnold's appraisal
theory of emotion.
What doesn't seem to exist
yet is a tool or checklist
to help us really identify what
is the micro-gap that we're
facing in our work.
And so I propose to the knowledge mobilization sector-- and maybe you're interested in this, as well; if so, connect with me-- that maybe we should come up with a tool that, at the very beginning of the process, really looks at what type of gap this is.
And then, by determining what type of gap it is-- whether it be knowledge, communication, motivation, or skill-- we can come up with the appropriate initiatives and solutions to help fill that gap.
So this is just rough.
But if it was a knowledge
gap, we might say,
do individuals
realize that there
is a way to fill the
knowledge-to-action gap?
If not, well then, maybe
it's a knowledge gap.
Do they know that
the knowledge exists?
And can they find the knowledge?
And if not, then it's a knowledge or communication gap.
Are the goals clearly
communicated and understood?
Well, if not, then it's
clearly a communication gap.
In terms of motivation,
are individuals
resistant to changing course?
Is there any apathy
towards the change?
If so, then it's possibly
a motivation-type gap.
For skill, you need
to ask yourself,
is it reasonable to
think that somebody
can fill this
knowledge-to-action gap
without practice?
What will they need to practice?
Where are the opportunities
for them to practice?
In terms of habits, you need to ask: are the required behaviors that I want the individual to do with this knowledge actually habits?
And if so, are there
any existing habits
that need to be unlearned?
And then finally,
in the environment,
are there factors in the
environment preventing
the individual from being
successful in their change?
What do individuals need
from the environment
to make them successful?
And so these are just some questions that I threw together.
But again, it is my
hope that one day we
can make a gap assessment tool.
And if this is interesting to
you, please reach out to me.
I'll have my email at
the end of the slides.
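As a rough illustration of what such a gap assessment tool could look like, here is a minimal sketch in code, using the screening questions above. The yes/no structure and all of the names are hypothetical-- this is just one way the checklist could be organized, not an existing tool.

```python
# A minimal sketch of the proposed gap assessment tool. The questions come
# from the talk; the yes/no screening structure and all names are hypothetical.

GAP_QUESTIONS = {
    "knowledge": [
        "Do individuals realize there is a way to fill the knowledge-to-action gap?",
        "Do they know the knowledge exists, and can they find it?",
    ],
    "communication": ["Are the goals clearly communicated and understood?"],
    "motivation": [
        "Are individuals resistant to changing course?",
        "Is there apathy towards the change?",
    ],
    "skill": [
        "Is it reasonable to think someone can fill this gap without practice?",
        "Where are the opportunities to practice?",
    ],
    "habit": [
        "Are the required behaviors habits?",
        "Are there existing habits that need to be unlearned?",
    ],
    "environment": [
        "Are there factors in the environment preventing success?",
        "What do individuals need from the environment to succeed?",
    ],
}

def flag_gaps(answers: dict[str, bool]) -> list[str]:
    """Return the micro-gap types whose screening answers indicate a problem."""
    return [gap for gap, is_problem in answers.items() if is_problem]

# Example: goals are unclear and an old habit is in the way.
print(flag_gaps({"communication": True, "habit": True, "motivation": False}))
# -> ['communication', 'habit']
```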
OK, so that was the first piece that I was really interested in, in terms of what I call the knowledge determination phase: really determining what type of gap we're dealing with.
Now I want to look a little
bit closer at that knowledge
determination phase.
So this is a zoomed-in
picture of the flow chart
that I showed earlier,
just looking at the start,
and looking at some of
the steps in the process.
So we already dealt
with the type of gap.
So have you identified a
potential problem or issue
to be dealt with?
That seems like it's manageable.
Even if you don't have
the gap assessment tool,
I think most people can identify
some place in which knowledge,
in some way, shape, or
form, can help bridge
the divide between good practice and evidence-informed practice.
Do you possess the knowledge
that you want translated?
Again, most people will
have this knowledge on hand.
And if not, they can
get it, whether that
be through systematic reviews,
whether that be through lived
experience, local
community data.
Most people have some idea of--
they have the knowledge
to fill the gap
that they've identified.
But they need to ask themselves, is the knowledge ready to be used?
So if you've identified a
potential problem or issue,
and you have knowledge
that you want translated,
well, is that knowledge then ready to be used to fill the gap?
Because knowledge exists on
a continuum of readiness.
And in my research, I haven't
found any sort of tool
that really looks at this
piece about assessing
whether the knowledge
is ready to be used
and in what capacity.
And so that's been really the
driving force for this tool
that I'm about to talk about.
So when you're
thinking about how
ready knowledge is
to be used, there are these two quotes that come up.
They're from the book Knowledge Translation in Health Care: Moving from Evidence to Practice.
And they say, when considering
the end-of-grant KT activities,
it's critical to consider
the strength of the evidence
and its significance and tailor
our strategies as appropriate.
So here, it's just
that-- not all knowledge
is born equal because it's
on this continuum of use.
So even if knowledge exists,
it may not be right for you.
So decisions about the extent
and ambitiousness of KT plans
should be guided by the
reliability, validity,
strength, and significance
of the research findings.
And so that's what I really
wanted to flip into a tool.
So I came across, really,
three overarching criteria
for what we might want to
look at when considering how
ready evidence is to be used.
The first is, is your evidence,
or is the knowledge that you
have in hand, is it couched
within a larger body
of work and existing
within a solid foundation
of valid, high-quality
theory and research?
So the idea here
is that you don't
want to just put
excessive emphasis
on a single small
study or studies
of poor methodological
quality, or studies
in which the strength
of the evidence is low.
So this can help fight against
cherry-picking of data.
So you might hear that
there's an issue that
arises in the news.
And some media person comes
on and says, well, look,
there's this great
research that's
been done that
addresses this issue.
Why hasn't it been implemented?
We need to move forward
on this research
to help address the issue.
But if it's not of great quality-- if it's a single small study with poor methodological considerations-- then it might actually be unethical to move on that right away.
Because then you might be implementing something that maybe wasn't tested as thoroughly as it should have been, or that could even be harmful to people.
So it's important that the
knowledge is of high quality.
Of course, then the question
is, what is knowledge?
So there's the idea of the
rigor of the knowledge,
like the methodological
quality of the knowledge
versus the relevance, how
relevant is it to actually
addressing an issue.
And then you get into
the weeds of research
versus practice-based evidence.
So when does evidence from the front lines maybe outweigh research evidence?
And what happens
if they collide?
And so that's all to
say that, in this tool,
for the sake of
simplicity, I really
did focus on research-based
empirical-type evidence.
And so this tool
is really tailored
towards assessing the readiness
of empirical research-based
evidence from that lens.
So that's not to negate
the importance of relevance
and practice-based evidence and
evidence from lived experience,
but just to say that I
couldn't incorporate everything
into this tool.
And so if that's
what matters to you,
then this tool really isn't
designed with you in mind.
And some authors argue
that knowledge synthesis,
specifically systematic
reviews or meta-analysis,
should be considered the base
unit of knowledge translation.
To some extent, I
agree with that.
But I also have
some issues with it.
So systematic reviews,
if you've ever
read one, or a
meta-analysis, often,
the focus is so narrow in terms
of which studies they include
and what studies they
exclude in their synthesis.
And they're always
looking for populations
that are homogeneous,
so that are the same.
And so the results from those systematic reviews and meta-analyses are often not very widely applicable,
and in some cases
might be meaningless.
To the credit of academia, this is part of the reason that people are moving to what's called realist reviews, where they look not just at which populations the effect might occur in, but at why-- what are the reasons for success or failure here.
But that's all to say that I slightly disagree that knowledge synthesis should be the base unit.
But it is good if it is a
base unit for your knowledge.
Another thing to look
at is, is the evidence
relevant or appropriate for
the targeted domain of use?
So if whatever knowledge or evidence you have in hand is going to be of major significance to the issues at hand, and it's going to really impact the users of that knowledge, then it should be given higher consideration.
So evidence should be locally
relevant and adaptable
to its target domain of use.
And then lastly, you
need to think about,
will the evidence have
a significant impact
on the knowledge
users or the system?
So this gets into the
weeds a little bit
of the ethics of knowledge
translation, knowledge
mobilization.
But that is to say, if you choose not to move forward with a piece of evidence or knowledge that could greatly impact or influence people's quality of life or health outcomes, there's an opportunity cost of not moving forward quickly enough.
So a great example of this
is with the story of ulcers.
So it was 10 years between when simple antibiotics were found to be a cure for ulcers and when that was actually implemented as an evidence-informed strategy. And ulcers are debilitating and common. So in those 10 years, there were a lot of people that, I would argue, were probably needlessly suffering from ulcers.
So the question would
be, should that knowledge
have been implemented and
mobilized and translated
quicker than it was?
And so that's just
something to consider.
So in terms of the
actual tool, the tool
is divided into two sections.
The first section looks at
the quality and strength
of the evidence.
And the second section looks at the significance of the evidence.
This tool, again,
is designed to be
used by anybody
who wants to assess
the KT readiness of completed
or near-completed research.
So it's really focused on
end-of-grant KT activities.
And as I mentioned before,
the current checklist
deals more with
empirical evidence--
so it's from a health and
social science perspective--
than it does from
other perspectives.
So you'll see in a
second that there
are some initial considerations
I use with the tool that
are based on the
evidence pyramid,
in terms of empirical evidence.
And so this first
section can be adapted
to meet the needs
of your organization
and whatever you feel is valid
evidence in the area where you work.
The important thing about evidence is that it's always defined by the users.
So the users of the
evidence and the ones that
are going to be impacted
by it are the ultimate ones
that determine what type
of evidence is meaningful.
And so this tool is, again, just my interpretation of what's meaningful.
So I'm going to show you first
the blueprint of the tool.
I'll show you the
behind-the-scenes of the tool.
And then I'll quickly go
over where it's going.
So this is the tool
in all its ugliness.
This is the very
first iteration.
I go through this in a lot more
detail in the previous webinar
that I did for the
other organization,
the sister organization
that was mentioned.
I encourage anybody that's
really interested in finding
out more about my
thought process
throughout this tool to go
and watch that first webinar.
Because it goes into a
lot more detail than I'll
be going in today.
But I just wanted to
give a general overview
of the end-of-grant
readiness tool.
So you'll see that, in the three colors, there are really three sections.
The first one in the green is
really an initial consideration
section.
The second section looks
at the quality and strength
of evidence.
And the third section looks at
the significance of evidence.
You'll see that, on the very right side, there are points awarded for the answers to each question. And those points ultimately add up to a total score.
And depending on that score, the tool will suggest one of three possible outcomes: low readiness to mobilize or translate that knowledge-- meaning the evidence really isn't that ready-- moderate readiness, or high readiness.
And we'll go through those
a little bit at the end.
But let's just start
at the beginning.
So again, the initial
consideration, as I said,
is based on the
evidence pyramid.
So here, if you're starting
off with a meta-analysis
or knowledge synthesis
of some sort,
you're starting with
a lot more points
than if it's just a single
observational study.
Now, that's not to say that the observational study won't end up with moderate or even high readiness to translate at the end, depending on the rest of the outcomes in the tool.
This just gives an
initial weighting.
Because in some
senses, it can give you
more confidence in the
findings of the evidence
and the knowledge
that you are using.
So a meta-analysis starts off with 10 points, and you can go down from there. An observational study starts off with 1 point, and you can go up. There are no negative points here; it's a 10-to-1 scale.
And then the first thing you need to think about is: is this empirical evidence, the knowledge that you're dealing with, of high quality, methodologically or otherwise?
And there are a lot
of different tools
that you can use to appraise
methodological quality.
It's also called risk of bias.
You'll hear these tools
called risk of bias tools.
And you just Google search them.
There's a ton.
And so I didn't want to
specify which tools to use.
All that to say, whatever tool you use, just transform its result onto this plus 10 to negative 10 scale. And then, at the end of the day, you'll be able to use this checklist.
And so if it's of
really high quality,
you can get up to 10 points.
And if it's of
really low quality
and it's not very well
done methodologically,
you can lose up to 10 points.
So right off the bat, if it's
a very poor meta-analysis,
it's going down to
1 point or 0 points.
And if you're starting with a really solid observational study, you're up 10 points right there.
So the next question
is, is the evidence
in line with the existing body of knowledge, or couched within the existing literature?
Again, I talked
about this briefly.
You can have, yes,
up to 5 points,
or no, subtract 5 points.
What is the estimated
effect of the outcome?
So this talks
about effect sizes.
The reason for this question,
and the next question
about power analysis,
is due to something
in the social sciences
called p-hacking.
So there's been a lot of
concern within psychology
and the greater social
sciences that a lot of emphasis
has been put on
statistical significance
and not a lot has been done on
effect sizes and replicability.
So the problem here is, in any study,
if you have a big enough sample
size, if you have thousands
upon thousands of
participants, it's
very easy to find
statistical significance.
But what's hard to find is meaningful differences. And that's what effect sizes capture.
What is the actual effect
that you're looking at?
And so here, I
wanted to give points
for larger effect sizes
and adequate sample sizes.
And this helps fight
against the fact
that some studies might come out with high significance but be relatively meaningless because the effect size is so low.
So you don't need to get too
bogged down in that at all.
But I do have some links within
the tool to help navigate that.
And then the last question
here is, is the evidence
ecologically valid?
Another way of saying that: is the evidence generalizable?
Did they just look
at, say, a population
of students to
find this evidence?
And if they looked at
students in university,
is that generalizable
to lower income,
ethnically diverse
populations in rural Alaska,
if that's where the gap exists?
So you need to think: is whatever evidence you have generalizable? Is it ecologically valid to the real world and to where you want to implement your knowledge?
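To make the scoring in this first section concrete, here is a minimal sketch of how the points described so far could be tallied. The pyramid starting points (10 down to 1), the plus or minus 10 for quality, and the plus or minus 5 for fit with the literature come from the talk; the intermediate pyramid tiers and the exact point values for effect size and ecological validity are hypothetical placeholders, since the talk doesn't give them.

```python
# A minimal sketch of the first section's scoring, under the point values
# described in the talk. The effect-size and ecological-validity point inputs
# are hypothetical placeholders; exact values aren't specified in the talk.

PYRAMID_POINTS = {
    "meta_analysis_or_synthesis": 10,  # top of the evidence pyramid
    # ... intermediate designs would fall somewhere in between ...
    "observational_study": 1,          # bottom of the pyramid
}

def quality_section_score(
    design: str,
    quality: int,            # -10 to +10, from a risk-of-bias tool of your choice
    fits_literature: bool,   # couched within the existing body of knowledge?
    effect_points: int,      # hypothetical: points for effect and sample size
    ecological_points: int,  # hypothetical: points for generalizability
) -> int:
    score = PYRAMID_POINTS[design]
    score += max(-10, min(10, quality))    # clamp quality to the +/-10 range
    score += 5 if fits_literature else -5  # fit with the existing literature
    score += effect_points + ecological_points
    return score

# A really solid observational study, in line with the literature:
print(quality_section_score("observational_study", 10, True, 3, 2))  # -> 21
```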
The next section is the
significance of evidence.
This section is a little bit more subjective. Whereas in the first section there are scales to tell you about methodological quality-- although I'd argue that some of those are a little biased and subjective, as well-- and about power analysis, and even the ecological validity question is only somewhat subjective, this section relies more on judgment. So you can take this tool sort of with a grain of salt, in the sense that a lot of these questions might be subjective, and different people would rate them differently.
But at the end of
the day, this tool
is just designed to give
you a quick snapshot
of your research.
It's not designed to take a long time to really fill out, and it shouldn't.
But it's really designed
to get you thinking about,
how ready is your evidence to be translated?
So this next section,
significance of the evidence,
starts with a note that you should maybe consult your stakeholders or knowledge users to help answer these questions.
But the first question is,
for example, does the evidence
fill a knowledge
user gap or need?
Will this knowledge
really fill their gap
or help address the issues
that they're facing?
And if yes, with the need determined via a specific request, you get plus 15. If it was determined via a needs assessment or a formal consultation, plus 8. Via local opinion, plus 6. And if no, negative 15.
So the point here being that,
if your knowledge is not
going to fill a
knowledge gap or need,
then it's really not
ready to be translated.
So you're losing 15
points off the bat.
Because if there's no need or
appetite for your knowledge,
then no one's going to
pay attention to it.
You could have a very
well-designed meta-analysis.
But if you're trying
to get it into practice
and it doesn't fill a need
or want at the knowledge user
level, then you're going to
be fighting an uphill battle.
And it's maybe not ready to
be translated in the same way
that something would be
that would fill the need.
The next question
is, can the evidence
be applied to the
target population?
Again, yes, maybe, and no.
So again, this kind of goes
to ecological validity,
as well, but more generally.
Does the evidence directly
address the desired change
of beliefs, attitudes, behavior?
So let's say, for
example, a psychology
study was looking at attitudes.
Well, we know from
my gap analysis
that I just talked about
that attitudes don't actually
predict behavior.
So if what you're trying
to do is change behavior,
but the evidence was just
looking at attitudes,
then maybe the evidence
doesn't directly
address the desired change,
for example, in behavior.
And so here, you could score
a plus 5 to negative 5.
And then lastly,
does the evidence
provide a new, novel,
or innovative way
to address the desired change?
If not, you don't
lose any points.
But if so, you get some points.
This is probably one of the most
arguable pieces of this tool,
about whether that should be a criterion.
This comes down to the
fact that you can't get out
of an issue or problem with
the same thought processes
and thinking that got
you into that problem.
And so if you can find a
new, novel, or innovative way
to address a desired
change, then it
actually might be more
successful and have more merit,
and you should put some more
efforts, in terms of KT,
into it.
But again, that could change.
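Pulling the point values of this second section together, a minimal sketch of its scoring might look like the following. The plus 15, plus 8, plus 6, and minus 15 values for filling a need, and the plus or minus 5 for directly addressing the desired change, come from the talk; the applicability scale and the novelty bonus value are hypothetical, since exact numbers for those aren't given.

```python
# A minimal sketch of the significance section's scoring. The need-source
# points and the +/-5 range come from the talk; the applicability input and
# the novelty bonus value are hypothetical placeholders.

NEED_POINTS = {
    "specific_request": 15,  # need determined via a specific request
    "needs_assessment": 8,   # via a needs assessment or formal consultation
    "local_opinion": 6,      # via local opinion
    "no_need": -15,          # the evidence fills no identified gap or need
}

def significance_score(
    need_source: str,
    applies_to_target: int,  # hypothetical scale for yes / maybe / no
    addresses_change: int,   # -5 to +5: directly addresses the desired change?
    is_novel: bool,          # bonus only; no points are lost if not novel
    novelty_bonus: int = 3,  # hypothetical bonus value
) -> int:
    score = NEED_POINTS[need_source]
    score += applies_to_target
    score += max(-5, min(5, addresses_change))  # clamp to the +/-5 range
    if is_novel:
        score += novelty_bonus
    return score

# Evidence requested by knowledge users that directly targets the change:
print(significance_score("specific_request", 5, 5, True))  # -> 28
```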
So that's the ugly blueprint of
the tool with all the scoring
criteria.
And then, at the end of the day, you have these readiness outcomes. They're based on suggested cutoffs for each. But again, those need to be pilot tested, and we'll get to that.
But the readiness outcomes
are low readiness--
so if you score low readiness,
that suggests that maybe more
research is needed.
But passive
dissemination practices
here would still count.
So I don't want to ever say
that a piece of knowledge
or evidence should never
be translated, but maybe
just with caution--
so presented, for example, at a conference or in a journal article, where other peers could read about it,
so more of these passive
dissemination type of pieces.
If it scores as moderate readiness, then active dissemination-- more targeted dissemination practices-- may be appropriate.
And then if it scores
higher readiness
to translate or
mobilize, then what
you might be looking at there
is actual implementation
readiness.
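As a sketch of that final step, mapping the total score onto the three outcomes might look like this. The cutoff values here are hypothetical placeholders, since, as noted, the real cutoffs still need to be pilot tested.

```python
# A minimal sketch of mapping a total score to a readiness outcome. The
# cutoffs are hypothetical placeholders pending pilot testing.

LOW_CUTOFF = 20   # hypothetical
HIGH_CUTOFF = 40  # hypothetical

def readiness_outcome(total_score: int) -> str:
    if total_score < LOW_CUTOFF:
        return "low readiness: passive dissemination (conferences, journal articles)"
    if total_score < HIGH_CUTOFF:
        return "moderate readiness: active, targeted dissemination"
    return "high readiness: consider moving toward implementation"

print(readiness_outcome(25))  # -> moderate readiness: active, targeted dissemination
```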
So we'll go through
each of those.
So the first is low
readiness to translate.
So again, the evidence
doesn't really
seem to be ready
to be translated.
More high-quality, more significant research needs to be conducted.
Passive dissemination
or diffusion strategies
may be appropriate.
Stakeholders here
should be consulted
to make sure that
future research will
be of value and significant
to the end users.
Some examples for low readiness to translate, again, are presentations at academic conferences, sharing knowledge through research-centered media, and maybe some facilitated focus groups with knowledge users and stakeholders to determine their pressing and upcoming issues, and how your knowledge might actually fit or address some issues that you didn't know about and that might be up and coming.
So I never want to say that evidence shouldn't be translated-- just that it should be translated with caution here.
If you're looking at moderate
readiness to translate,
well, then you might be ready
for more active approaches
to dissemination.
So targeting specific audiences
other than researchers
might be useful to
get your message out,
to get them knowledgeable
and potentially
thinking about your evidence
and using the knowledge
you're trying to share.
So this quote, again, comes from the book Knowledge Translation in Health Care: Moving from Evidence to Practice.
It says, "Active approaches may
include tailoring the message
and meeting to the
specific audience,
linking researchers and
knowledge users through linkage
and exchange mechanisms, such
as small workshops focused
on the dissemination of a
synthesized body of knowledge
or those focused on
developing a user driven
dissemination strategy, engaging
media, using knowledge brokers,
or creating networks
of committees
of practice involving both
research and knowledge users."
So the idea here is
that it's a little bit
more active, a little bit
more intentional, if you will,
than the passive strategies
for low readiness to translate.
And then finally
here at the end,
you get high readiness
to translate.
So here, in order to score
high readiness to translate,
you need to have
done really well
with this piece of evidence
on the quality piece
and the significance piece.
So if you're scoring high readiness to translate, that means you started with a good study and it's very relevant to the end users.
And so you're very ready to
translate that knowledge,
because whatever
effort you're going
to put into it is
going to be relevant
and you can stand behind it.
That is to say, you need to do well in both sections of the tool.
So you'd come to high
readiness to translate.
And here, the evidence
may be highly useful,
and therefore should
go beyond regular means
of just dissemination.
You might want to consider
ways of putting your knowledge
or evidence into practice.
You need to decide if you want to use your knowledge or evidence to change attitudes or behaviors, influence decision-making, or influence intentions.
You really need to think
about what your goals will
be for implementing
that knowledge
and come up with tailored
and structured knowledge
translation plans in order to
do that and meet those goals.
So an example would
be, you may want
to begin with a
small-scale project
with a target population
in a local setting.
And then make sure to secure the
early involvement of knowledge
users and stakeholders
in that pilot.
And then if it goes
well, scale from there.
OK, so I brought you through
the thinking of the tool.
What's next for it?
So this has been an ongoing
process, like I said,
since my postdoc.
But really, what has happened in the meantime is that I've gotten busy with work and jobs, and it's sat a little bit by the wayside.
I have created this
beautiful Excel version,
with help from
Joe Grady at GREO, who was a summer student with us.
And he helped put together this
beautiful template that does
the automatic scoring for you.
It's a dropdown menu, and points
are automatically calculated.
So it's a little less ugly-- a little more beautified, if you will-- than what was previously done.
And I'm happy to announce that, based on the presentations that I gave on the tool years ago, some people have come forward with interest.
So one of the people that is really interested in this work is Dr. Belinda Goodenough from Dementia Training Australia.
And she really wanted to
create a Readiness for KT Tool.
She's calling it
the R4KT protocol.
She approached me
about using my tool.
And I gave her some
links to some other tools
that since then have
done similar things.
One of these is called the Hexagon Tool, which is very similar.
It looks at the
context around evidence
to determine its readiness.
And so she took parts of my
tool, parts of other tools,
and really combined them in
this Readiness for KT Tool.
And she presented the initial findings at the 2019 Australian Dementia Forum.
And the idea is, eventually,
to publish this tool
online as part of a web
portal for Dementia Training
Australia.
And part of the reason
that she's looking at this
is that they've had a
lot of research that
has been done in the past.
And the question now,
of course, is, well,
what has been done
with that research?
And what can be done and should
be done with that research?
And so the idea is
to use this tool
to help inform that, as well.
Besides that, I think that this tool could also be useful as a preconsideration for people undertaking research.
So thinking about how to do significant and high-quality research at the very beginning, and going through some of the questions on my tool, actually might be helpful for people at the initial stages of their research, to help ensure that, down the line, it can be used-- especially if they're not taking an integrated approach and are instead taking an end-of-grant type of approach.
So if the knowledge translation is going to come after the findings, then just trying to consider some of these questions that I posed and some of the scoring criteria upfront could, I think, be useful.
And so with that, I'd
like to thank you.
I'm always interested
in collaborations
and working with people on these
tools that I'm coming up with.
And so if you're interested in any of them,
please, please,
please, don't be shy.
Please contact me.
My email is travis.szt@gmail.com. Or you can reach out to me on Twitter. My Twitter handle is @DrSzt.
So I want to thank you, again,
for listening to my ramblings
and philosophy on knowledge
translation and mobilization.
Thank you.
JOANN STARKS: Well, thank
you very much, Travis.
That was really great.
We really appreciate you sharing
this information with us.
I think you did a great job.
But we do have a
couple of questions.
And one is, how do you determine
the scores for each question
or construct?
And as part of that,
why did you decide
to include negative scores
in your scoring system?
TRAVIS SZTAINERT: Right.
Yeah, so a lot of the
thinking, initially,
was guided by the research
I did, looking at the question
and trying to find
out, are there
any tools available for
this readiness to change?
But ultimately, it came down to a decision of face validity-- a.k.a. what looks good-- and how I imagined the tool being used.
And so, through pilot testing and through the work with Belinda Goodenough, who is working on her own scoring for the tool, we're hopeful that we can see how the tool works and better refine those scoring criteria.
But yeah, again, I also designed the tool and the scoring to balance out.
So in order to get into the highest readiness-to-translate category, you really have to have research that's done methodologically well, with high quality, and that is significant to the end users.
And so the scoring is sort of balanced out such that, again, you could start with the lowest-- you could start with an observational study-- and you could get high readiness to translate if both overarching criteria are met.
Or you could start with a meta-analysis that actually ends up not very ready to translate based on the scoring-- so there's enough flexibility to go both positive and negative.
Now, at the end of the day, given all the scoring, it's very rare that anybody would be in the actual negatives-- that any one piece of research would score negative.
Part of that is because I personally feel that any research, even the most basic research, can be translated, can have some knowledge translation in some way. So I didn't want to give people negative scores and have them think that their research isn't positive.
I think research
is always positive
and you can always
learn something from it
and help others
learn by sharing it.
So that was some of the
thinking around the scoring
and the positive and
negative numbers.
So again, yeah, I wanted an offset between the different categories, so that you could gain points or lose points.
JOANN STARKS: OK, thank you.
Regarding the quality
and strength of evidence,
I was wondering, do you
have some accompanying text
that maybe gives a little
bit of an explanation of some
of these things, such as
effect size, sample size, power
analysis?
Or is the assumption that,
if you're doing this,
you already have
that information,
or that you can
get it elsewhere?
TRAVIS SZTAINERT: Yeah,
that's a great question.
Thanks.
And I think the idea is to
have a guide to go with this,
and to really take you through
the different considerations
of each question.
I think it's great
if you already
have that knowledge with you.
And there are, as I mentioned, some great tools that can take the average person through scoring the methodological quality of research evidence. The Critical Appraisal Skills Programme checklists come to mind.
They're really
user-friendly and really
help you determine
methodological quality.
Having said that,
in some capacity,
I think it would be
easier to use this tool
if you had some background
knowledge in research.
So for community organizations that want to use a tool like this, where maybe they don't have that sort of expertise on hand, I would recommend, at least at this point, maybe working with a researcher or scientist to help answer those questions.
Because a researcher, or even a graduate student within a research institution, would probably be able to fill that section out fairly quickly.
So I don't want it to be too
onerous on people to fill out.
But I think having a user
guide would be really great.
And if you're
interested in that,
again, give me a shout
on email or Twitter.
JOANN STARKS:
Before we go, I want
to ask viewers to complete
the brief evaluation
for this webcast by clicking on
the link in the description box
below.
We do appreciate your
feedback very much.
I want to, again,
thank Dr. Travis
Sztainert for his
presentation today,
and to thank everyone
for participating.
I also want to be sure and thank
Shoshana Rabinovsky for helping
with logistics, and
NIDILRR for supporting
webcasts and other activities
of the Center on KTDRR.
Please visit our website
at www.ktdrr.org.
I hope you all have a
very good afternoon,
and we will see you at our next KTDRR event.
