>> Please note this session is being recorded.
If you have any objections, you may disconnect
at this time. Thank you very much. On behalf
of the National Cancer Institute, I wish to welcome
everyone to the March Advanced Topics in Implementation
Science webinar. [Inaudible] be joined fireside
by Drs. Gary Bennett and Patricia Arean. They,
of course, will be joined by our own Dr. David
Chambers to moderate the session. A brief
word about logistics and we'll be off. We
ask that you please keep your phone on mute for the duration of
today's presentation. As mentioned, the session
is being recorded, and muting all lines will
help us to avoid any background noise. We
encourage questions. They can be submitted
by using the Q&A feature on the right hand
side of your screen. Type your question in
the provided Q&A field and hit submit. Feel
free to submit your questions at any time,
and we'll open the session for questions
once the discussion is closed. And with that, I'll
turn it over to David.
>> Okay. Thank you, Sarah [assumed spelling].
And thanks to everyone for joining us. This
is another one in our set of fireside chats.
And for those of you who haven't been
with us the last few months: as opposed to
some of our other advanced topics webinars,
where we've had presenters walk through
PowerPoint presentations on particular
areas of interest in implementation science,
over the last few months we've really tried
to think about other ways to engage our community
around where we have been in implementation
science and where we are going. And this one
is, as I see it, a wonderful opportunity to hear from two
experts who've really begun thinking about
how health information technology and implementation
science go hand in hand. So both Gary and
Pat are favorites of ours and incredibly talented
through their careers. Gary has crossed a
whole range of boundaries in thinking
about health information technology: thinking
about technology from the start-up
perspective, from health intervention
research in his faculty role at Duke, and
then from really thinking about how we can use
the technologies that we're carrying around
with us on a daily basis to better engage
people, particularly those who are traditionally
underserved by our health services, to get
as good care, as good advice, and as good
support in advancing their own
health. So it's great to have Gary. I
remember introducing Gary at the fifth annual D
& I Science Meeting as a rock star, which
he truly is. And alongside him is another
rock star, Pat Arean. I got to know Pat during
my years with the National Institute of Mental
Health. She recently completed a stint on
the NIMH Advisory Council where she was a
voice for trying to encourage the institute
to think much more broadly about opportunities
for health information technology to advance
every aspect of people's lives in dealing
with mental disorders. Her work has been instrumental
in getting us to think differently about care
management, think differently about assessment
of people with mental disorders and how to
provide effective care, and it is so great
to have both of them sharing their expertise
and their experience in this manner. So the
way, as Sarah had said, this is going to work
is we'll have a number of questions or a prompt
to give a chance to hear from both Gary and
Pat. We'll hope that you will, as questions
occur to you, join in the conversation by
typing them into the Q&A. And so we'll be
as flexible as possible. And really just have
a chance to probe our experts for their thoughts
of where we are and where we should go. So
with that, let me just turn to an initial
question, maybe first for Gary and then for
Pat to follow up. I'm just curious: how
did each of you first get interested
in technology as a medium through which to
improve health?
>> Well, good afternoon, David. And thanks
for the really kind introduction. You know,
I have the distinction, I think, of having
been a nerd for most of my life. And I've
been tooling around on the Internet since
I was a pretty young kid. You know, at the
time when I first started, you know, you connected
to the Internet through bulletin board services
and things like that. And so I've been playing
around for a long time, and I've been coding
for a long time. I just haven't ever gotten
very good at it. So I've sort of kept these
as parallel existences: working on the
Internet and coding at night, and during
the day trying to advance a research
career in health intervention
research. And I got very interested in the
combination of the two when I started
to see, in the early research we were doing,
evidence of folks in medically
vulnerable communities being early
adopters of very early mobile devices. At
the time these were very inexpensive
cellphones, but they allowed us to do some
interesting things with them. And I started
to see them in the low income populations
in which we were doing our work. And that,
combined with some early evidence from colleagues,
suggested that marrying two of my
loves, technology and intervention science,
might be a way for us to try
to reach out and to engage the populations
that we in the scientific community had labeled
as hard to reach. And so for the last almost
15 years now I've been doing work of that
type.
>> Awesome. Pat?
>> Yes. Hi. Welcome, everybody, and also,
David, thanks for the nice introduction. So
I'm a little older, I think, than Gary, and
I have also always been an early-adopter geek;
I had one of the first Palm Pilots, for anybody
who remembers what that is. But when I was
a graduate student, my first foray into technology
and health was on a Robert Wood Johnson Foundation
funded grant to see whether or not biofeedback
technology, in combination with physical therapy,
could help older African American men who'd
suffered strokes and had not regained the
use of their affected limbs after a year of
physical therapy. And we had the unique distinction
of having the very first color computer monitor
at the hospital I was working at, Goldwater
Memorial Hospital, though it was limited to three
primary colors. And the feedback was basically
these bar graphs as people were trying to
make a fist, and we used that feedback to
improve their ability to engage in simple
motor tasks. But really where I started to
dive in was a mechanism the National Institute
of Mental Health had for a while, whereby
academic researchers could partner with
counties or community mental health
centers to create innovations in research
to address implementation and quality issues.
And I had one of these grants, and when
the ARRA supplements were released, we had
applied for a supplement to our center to
create an electronic health record for the
agency that could help clinicians make decisions
based on questions that they would ask patients.
So we were using technology to help
them make clinical decisions around what kind
of intervention to provide and what kind of
literacy materials to give
their patients. And so that was
really, at the time, an innovative
and exciting opportunity. iPads had just come
out, so we eventually popped this technology
onto iPads that clinicians carried around
the clinic. At the same time,
we also applied for an ARRA-funded pilot project
through the National Library of Medicine to
create a Smart Note to further support clinicians
in their decision making when working with older
patients with depression using problem solving
treatment. So the Smart Note basically utilized
previous clinical records that we had, as well
as clinical outcomes. This was really early
in the day, and we were using machine learning
technologies to come up with algorithms that
could predict how people were going to do at
a certain point, based on the notes clinicians
were writing in the records. So
a lot of the early work I did with technology
was around supporting clinicians in decision
making and enhancing their quality of care.
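A rough sketch of the kind of note-based outcome prediction described here: a tiny bag-of-words Naive Bayes classifier. The notes, labels, and model below are all invented for illustration; the actual Smart Note system is not described at this level of detail in the conversation.

```python
# Toy sketch: predicting whether a patient improves from the free text of
# clinician notes. The notes, labels, and this tiny bag-of-words Naive
# Bayes model are all invented for illustration.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(notes, labels):
    """Count word frequencies per class (1 = improved, 0 = did not)."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for note, label in zip(notes, labels):
        counts[label].update(tokenize(note))
    return counts, priors

def predict(counts, priors, note):
    """Return the more likely class using Laplace-smoothed log probabilities."""
    vocab = set(counts[0]) | set(counts[1])
    best, best_score = None, float("-inf")
    for label in (0, 1):
        total = sum(counts[label].values())
        score = math.log(priors[label] / sum(priors.values()))
        for word in tokenize(note):
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical training notes with "improved at follow-up" labels.
notes = [
    "engaged well completed homework mood brighter",
    "missed sessions persistent low mood poor sleep",
    "practicing problem solving steps symptoms dropping",
    "no change in symptoms ambivalent about treatment",
]
improved = [1, 0, 1, 0]
counts, priors = train(notes, improved)
print(predict(counts, priors, "completed homework mood brighter"))
```

With real data this would be a proper NLP pipeline with held-out validation; the point here is only the shape of the idea: text features in, predicted trajectory out.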
>> Cool. And at what point did you turn your
attention to dissemination and implementation
as a research goal? Maybe, Pat, do you want
to start us off?
>> In general or just in terms of technology?
>> Well, how about both?
>> Okay. Well, I mean, I've been interested
in access to high-quality care for
a very long time. As an intern at Bellevue
Hospital, I had the opportunity to work
in what I saw as one of the very early versions
of integrated care, where in the geriatric
outpatient clinic at the time, this was in
1989, the mental health clinic and the primary
care clinic were pretty much integrated. They
were only divided by the waiting room.
So the mental health clinic was on one side
of the waiting room and the primary care clinic
was on the other side and had the luxury of
being able to have a primary care physician
do, like, a mini mental status exam or a
depression screen and just come over to me
and say, I'm going to bring
this patient to you. Would you mind talking
to her? Lost her husband, et cetera, et cetera.
So I've always been fascinated by this, and I've
always worked with populations who historically
don't use traditional mental health services.
So this idea of integrating, you know, mental
health services into non-traditional mental
health settings has always been intriguing
for me. I got really interested in the ways
that technology could facilitate implementation
from the perspective of, like I said earlier,
quality of care: can we support clinicians in
decision making, but also simply get behavioral
interventions out there into the hands of
the community? As Gary had pointed out,
the, you know, ubiquity of mobile devices
has really just been incredible across, you
know, socioeconomic status, across, you know,
ethnic minority communities. And so, to be
honest, when I started to get frustrated
with how hard it was to train clinicians
in evidence-based practices, I decided, well,
why don't we just cut out the middleman, put
these interventions on smartphones, and see
whether this was another way of increasing
access to quality care.
>> Cool. And Gary, what inspired you to think
more about dissemination and implementation
research?
>> You know, I think I entered the field
for reasons that are similar to many people's,
inasmuch as I was frustrated by the effort,
time, and intensity of our work relative to
the strength of our outcomes, and by the
inability to see those tools translated
into the hands of real people. I didn't
find the field of D & I research right away;
I found it after I got involved in startups.
And so, you know,
I have long had an interest in translation
via commercialization. After growing an early
company that I was involved in a little bit,
I took a short leave from my academic position
and worked inside the company that ultimately
acquired our startup, a large disease management
company with 20-some-odd million lives under
management. In working there, I got to see
the process of dissemination from a very
different perspective.
And we were involved in disseminating the
innovations that we created and that were
evidence based. Working in industry, I also
had the chance to be involved in identifying
how to locate evidence-based tools, how to
get them into the hands of our clients at
the time, and then how to study their outcomes,
all in the context of industry. To be
really honest, I was blown away by how
similar the processes were in industry and
in my academic role. The currencies for judging
one's success are a little bit different,
but the actual processes, conversations,
intentions, and goals were very similar.
And so when I came back to academia
I found the field and was really thrilled
about it. I viewed D & I science at the time
with hope that it could help us better
disseminate the kinds of tools we'd had some
success with through these other approaches.
And so I've been
very happy to be a member of the community
ever since. As digital has grown, I think in
some ways it's lagged other areas of the D & I
research community, for some reasons I think
we'll get into in just a little bit. But
nevertheless, I still hold true to that initial
interest in using D & I science to accomplish
the goal of really disseminating those
innovations and getting them into the hands
of patients and providers.
>> Cool. You know, I've found, just in talking
with investigators, that it's often hard,
beyond just saying we want to create an app
for this or an app for that or some sort of
new technology, to pinpoint the key research
questions that are most important to tackle.
And we
have that potential to just keep creating
a whole bunch of new widgets that may, you
you know, further exacerbate the challenge
of who actually has access to any of them.
I wonder, when each of you jumped into
this area, how did you sort of come up with
the specific research questions to tackle
in this space of technology and health? And,
Gary, maybe you can start us off.
>> Sure. So we're fundamentally interested
in how to improve the treatment of obesity
using these kinds of low-cost digital devices
in the primary care setting. And we work almost
exclusively in community health centers. And
so the questions there, from a D & I
perspective, really concern questions of
engagement. How do we get providers
to use these tools? What's
the nature of their use? How does it
interfere, in some cases, with their
work flow? And then we have, of course, similar
parallel questions for patients. Are patients
using the tools? Are they using them in the
ways that we expect? What are the roadblocks?
What are the challenges to their use?
What outcomes are produced? What are some
of the unexpected outcomes experienced in
the context of these kinds of treatments?
We've been guided in
our work mostly by RE-AIM. We've used RE-AIM
to help us formulate important questions
related to reach and representativeness,
effectiveness, adoption, implementation,
and maintenance. It was really critical
for us, in the early stages of our work, to
have a framework like RE-AIM to guide our
efforts, and that's taken us a long way.
I would say that
as we've gotten more into the work in recent
years, some of those questions have changed.
These days we're getting much more interested
in questions of cost and long-term
sustainability. You know, the sustainability
of digital health interventions, particularly
those that are patient facing and that are
not integrated into the electronic health
record raises a whole host of questions from
a D & I perspective that are different from
those facing other areas. Who's going to host these
servers? How much uptime do you need? What
are the privacy considerations? What are the
customer service and technical support needs?
There's a wide range of actual implementation
questions that really have to do with the
hosting and the maintenance of the technology
itself. So we're getting more and more interested
in those sets of questions. And then at the
same time, one of the observations we made
in our early work is that we tend to situate
our technology at the nexus of the patient,
the provider, and the system itself. So we
really see our technologies as sitting amidst
those various stakeholders. But in some ways,
that's a counter-intuitive way to think
about digital. I think the way that most of
us tend to think about digital interventions
is that we tend to think of them as the apps
that are on our phone. They exist in a stand-alone
capacity without a lot of human support, without
integration with providers. And so in the
last couple of years we've gotten much more
interested in this question of how do you
disseminate actual standalone treatments?
There's actually a lot of good outcome evidence
for these kinds of tools, and we have gotten
more interested in your question, David. If
you're not going to just build a new app,
what do you do to get an evidence-based treatment
disseminated through a digital strategy? How
do you do that without a human being involved
and how do you do that at scale? And so that's
a new line of work for us that we're
starting to investigate using a wide range
of new methods that I'm happy to talk about
if there's interest.
>> Pat, how do you identify particular questions
to focus on?
>> Yeah, I mean, the technology for me is
really just in the service of addressing access
and quality questions, right? When I started
thinking about the use of technology for
improving access to care, it was really about
how hard it is, in my field, which is treatment
of depression, for people in this day and age
to actually access care even if they want to.
As we were doing nonresearch implementation
of strategies like collaborative care, and
I was helping care managers learn how to do
their jobs, I was very aware that even though
you may have taken away the stigma by
integrating mental health services into
primary care medicine, you still hadn't
corrected the problem that, for behavioral
interventions, you need to come in and see
somebody once a week. And
a lot of community agencies were really struggling
with this model, particularly in some communities
like the Latino community in California, where
we would be working with clinics that were
serving undocumented workers. Typically they
faced a choice: I have a job today that somebody
offered me, but if I come in for my appointment,
they're going to take the job away. And so
I started to think,
like, well, how does technology take care
of that particular issue for the participant,
for the patient? How can I extend the reach
of the clinician? Traditional
models have used human beings, like field
workers or community health workers, to go
out and check in on people. But that's
expensive. Those people also need to be
trained, and they're hard to identify; it's
really hard to find people who, for instance,
are bilingual and can do this kind of work.
So, given that these are people who had the
technologies, couldn't we transfer some of
the tasks to the
phone or could we transfer some of the tasks
to the Internet to help check in on people,
make sure that they're doing
better? Another thing, too, is that for mental
health our treatments and our assessment tools
are cumbersome. And there's a lot that technology
could do to help get rid of some of the
multiple decision points that clinicians
have to make: when I see this kind of patient
in front of me and I try this intervention
and it doesn't work, what's next for me to
do? The automation that machine learning,
and in some cases natural language processing,
can provide, making that decision-making
quicker for the clinician, just seems like
it would help a lot with quality. So I guess
the short
answer to that question is that I'm always
thinking about, you know, what struggles the
clinician and the patient are having in
utilizing these services. And because new
technology has really boomed in this area,
I've naturally gravitated towards asking
what technology can take care of.
I think Gary raises a really good point about
how much technology can actually do and how
much needs to be supported by human beings.
We recently, well, a year ago, completed this
very large-scale, remote, randomized clinical
trial where we randomized people to three
different depression apps and followed people
for 12 weeks to see what their outcomes were
like. And this was originally designed as
a feasibility study where we had only intended
to recruit 150 people. But we hit that number
very quickly. We hit 150 in the first week.
And so we got permission from NIMH and from
our IRB to continue doing the study. Basically,
in a matter of five weeks, we screened 3,000
people and randomized about 1,200 people into
this study. And what
was interesting was that our reach was so
impressive that the demographics of our sample
map perfectly onto the U.S. Census Bureau's
estimates of minority status in the United
States. The trick is, though,
it was really hard to keep people engaged
after about four weeks.
They showed really nice improvement, which
says something about potentially the power
of these interventions: the fact that somebody
can use them when they need them. They don't
have to wait a week before they, quote unquote,
get a therapeutic dose of something. But at
the same time, after
we'd done focused interviews with some of
the participants, there were some people who
said it would have really helped
if I had somebody to actually, you know, communicate
with, even if it was through SMS or text messaging
or IM to ask some questions about how I was
struggling with using the app. So I don't
think we'll ever replace human beings completely,
but we may be able to increase their efficiency
through the use of these interventions.
>> I'm going to jump in really quickly and
just say thank you so much to those who have
already submitted questions. I've gotten a
couple through the chat. You're welcome to
use that or the Q&A feature. I'm putting
them into the queue, so I'm going to let David
keep going with some of the questions we've
slated already. But definitely send those
in. I'm recording them and will ask them in
just a little bit.
>> Absolutely. Yeah. And thanks, both of you,
for the answers so far. You know, I think
the first computer that I was exposed to was
an Apple II, prior to the Apple II Plus, and
at some point I did get a crack at the
original IBM PC. Obviously, things
have changed since then. I was just curious
what each of you, over the course of the
research that you've been conducting, sees
as some of the major changes in technology,
and how that's influenced your thinking
about the research to pursue.
Gary, you want to
start us off?
>> Sure. My first computer was an Atari 2600,
and I moved on to an Apple II and a IIe and
a IIGS. And, yeah, it's been a long road.
But, you know, I'd say the technology
has changed markedly, although I have a bit
of a polar opinion on this topic. In my
experience, hardware changes very, very rapidly.
We've seen enormous hardware changes in, say,
the last ten years. Software changes a lot,
too, but in ways that are different from
hardware. You know, we're
all still using Microsoft Word, and a lot
of us are still using Internet Explorer,
I'm sorry to
say. Software packages don't change quite
as quickly. If you look at the leading
smartphone apps for fitness and weight loss,
the top five have remained, acquisitions
aside, pretty stable since the release of
the App Store in 2008. So particularly if
one is keeping
a code base, language has changed, right?
If you coded for the Palm operating system
a while ago, that's gone. So languages have
changed, but if you do a good job of keeping
your code base updated and porting it to new
languages and new platforms as they emerge,
the changes are somewhat less challenging
on the software side. And so that's one of
the reasons I think it's really important
for those of us, particularly in the behavioral
science community, to be very thoughtful about
designing our software in a way that is more
or less platform agnostic, and there are lots
of different strategies for accomplishing that.
One of the trends we've seen, obviously, is
that we've entered a mobile revolution, and
that's done a lot of things, not the least
of which is democratizing access to the
Internet in ways that I think we couldn't
have imagined 20 years ago. But the telephone
as a treatment
tool, as a digital treatment tool, I think
is enormously important. And in this case
I actually don't mean smartphones. We use
interactive voice response, which is like
an automated telephone call, and text messaging
with great success in medically vulnerable
communities. We have nearly 100 percent
penetration of those technologies
in the samples in which we work. And the engagement
with those technologies is extraordinarily
high. We're just finishing a trial now, a
two-year randomized controlled trial of a
yearlong weight loss intervention in
community health centers. And we asked patients
to use our app, which basically involves
taking an interactive voice response
call once a week and getting automated, tailored
feedback and skills training through the phone.
And at the end of one year, we had a median
engagement rate of 93.2 percent, which essentially
means that people took their calls a lot.
With those kinds of technologies, we've had
a lot of success in smartening up what are
relatively dumb technologies. And
that has become, I think, a really important
way of us capitalizing on the digital revolution.
I'd say one other quick change I've noticed
that's very important in our world is
that the nature of interfaces
has changed a lot. These days most of the
folks that we work with in medically vulnerable
communities have never used a keyboard. They
have only used a touchscreen. And so we use
tablets as a matter of course because people
don't have experience with mice and keyboards.
And the thing that I think is happening now
that's going to change things immensely going
forward, and, I think, make our lives
even more interesting, is the emergence
of voice as an interface. And we're beginning
to get involved in that as well. Tools like
Amazon's Alexa, the Echo, and other voice
entry tools, I think, are going to make it
much easier to put technologies in the home
close to people, and to get a lot of the
challenging data entry interfaces out of
the equation.
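The automated, tailored feedback call Gary describes can be thought of as a small rules engine behind an interactive voice response line. The prompts, thresholds, and function name below are invented; this is not the study's actual call script.

```python
# Toy sketch of the tailoring logic behind an automated weekly IVR
# check-in call. All prompts and thresholds are invented for illustration.
def weekly_ivr_call(weight_change_lbs, goal_steps_met):
    """Return the tailored feedback script for one automated call."""
    lines = ["Hello! This is your weekly check-in call."]
    if weight_change_lbs <= -1.0:
        lines.append("Great work: you lost weight this week.")
    elif weight_change_lbs >= 1.0:
        lines.append("Weight is up a bit; let's revisit your meal plan.")
    else:
        lines.append("Your weight held steady this week.")
    if goal_steps_met:
        lines.append("You met your activity goal. Keep it up!")
    else:
        lines.append("Try adding a short walk after lunch this week.")
    return " ".join(lines)

print(weekly_ivr_call(-1.5, True))
```

In a real deployment an IVR platform would render the script to speech; what makes a "dumb" phone call smart is exactly this kind of server-side tailoring.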
>> All right. Pat? A couple of thoughts about
how technology has changed?
>> Yeah. I don't have much more to add to
what Gary said, because I agree a hundred
percent with everything he said. But I might
just add a couple of words about different
kinds of technologies that we haven't thought
of as therapeutic. I have been working with
a company that is basically taking cognitive
remediation strategies, which we know can
enhance cognition in people with depression
and attention deficit disorder, and popping
them into a video game environment.
And I don't mean, like some companies, just
taking these cognitive tasks and making them
pretty. In this case it's using technology,
as well as game mechanics, to create really
challenging cognitive training games. Because
those tasks tend to be so hard and tedious,
putting them in a video game environment can
enhance the enjoyment factor of doing that
cognitive training. And I think, you
know, that's just starting to emerge as a
potential way of engaging children, and maybe
young adults, for whom that's how they
like to interact.
And so putting therapeutics into a video game
might be a way of really enhancing treatment
access. My colleague Skip Rizzo at the
University of Southern California has done
a lot of work with video game environments
for things like exposure therapy for PTSD,
as well as creating virtual therapists.
I don't know how successful
the virtual therapists might be, but certainly
what Gary was pointing to, using tools like
Alexa or Google Home, might be more palatable,
because the fakey look of a virtual therapist
might be disconcerting, whereas simply talking
to Alexa or hearing a voice might be another
way of enhancing access to care. And then
I know that there are colleagues of mine here
at the University of Washington who are starting
to explore virtual reality and augmented reality
technologies for things like cognitive training
and, potentially, social connection and social
communication for kids with autism, using
these immersive environments to really make
that process feel real to them.
>> Great. Thanks. So we're seeing some wonderful
questions coming in. I'm going to combine a
couple into, hopefully, one question that
doesn't anger either of the people who
submitted them. I think it reflects, at least
in one case, something Gary had said about
the similarities he found between developing
and testing tools in research and in industry.
That question, as well as another, asks about
how we know that biomedical research, and
that funded by NIH, is typically relatively
slow moving, while technology changes rapidly
and industry often has the ability to be
iterative. So the question that comes out
of this is: are there
things that we can do in thinking about the
funding of various research studies that better
support the evolution of health IT given its
rapid pace? Gary, do you have any thoughts
about that?
>> Just a few dozen. But, you know, there's
a lot to say here, unfortunately. Let me try
to be concise. It's true that industry tends
to be agile and to move quickly. The other
thing that I think we sometimes don't give
industry enough credit for is that it also
tends to be very attentive to data. And
so some of the most impressive data collection
infrastructures I've ever seen have been in
industry. You'd be hard pressed to argue that
Facebook doesn't have a top notch data science
core. And they tend to make evidence-based,
data-informed decisions as a rule. That's
been the case for many of the startups that
I've worked with or encountered. Many of
these startups even subject a lot of questions
to randomized controlled
trials. Again, you know, if you load up Facebook
right now, you probably are in a randomized
control trial testing some design feature
that they are evaluating. Certainly not all companies are doing RCTs, and those without the scale of Facebook certainly are not, but they tend to be very attentive to data. And yet those data tend to be observational, and perhaps not collected with the kind of sophistication or granularity that we would be comfortable with. And I think that's a message for both
sides. You know, perhaps we'll get a chance
to talk about partnerships later. I think
partnerships are critical in this space. I
think we as a field and funders have to get
more comfortable with non-RCT-based designs
for examining outcomes. I'm part of a startup
right now where we have one of the world's
largest collections of data on weight. And
we're able to model with a high degree of
granularity trajectories of change in weight.
But that's all observational data that probably
wouldn't pass muster for a lot of, a lot of
folks who would be reviewing grants of this
type. And so I think funders will have to help guide us to a greater appreciation for different research designs that might allow us to answer questions or to detect signals in a more rapid way.
And the nature of digital technologies, of modern software design, is that iteration is absolutely key. It's totally insufficient to imagine a digital technology that's created and rolled out without some expectation of iteration and redesign on the basis of how it performs in the hands of its users. And so I think we have to get comfortable with designs, maybe multipart studies, more use of single-case designs, a range of different adaptive trial designs, that will allow us to exploit those kinds of features.
In general, I think we and the funders in this space have to have funding mechanisms that allow us to get closer to what real dissemination looks like, to minimize the gap between dissemination science and real dissemination. And this is one of those areas.
>> Pat, suggestions or thoughts that you have about how to fund this kind of research in a more flexible, iterative way?
>> Right. So I co-chaired a workgroup for NIMH on this very topic, about how to support technology-based research. And we talked a lot about this. Gary is right on a lot of fronts: there are a number of research methodologies that we could be utilizing and learning from areas like human-centered design, computer science, and engineering. I think it's really on the reviewers and the program to be a little bit more tolerant of the fact that if you're going to be using or building a technological solution, there's actually a process that doesn't look like what we're typically used to in the health sciences. So education about that, and allowing some flexibility in appreciating the design differences, is important. But another
way to look at this, too, which we talked a lot about in the workgroup, is that if you're not trying to build a product, which is what a lot of companies are focused on, what's my product, but are really testing a principle, that changes things. Say you proposed to do a clinical trial in a health setting using technology to support a principle of implementation, for instance, using technology to improve the quality of clinician care. The technology itself might change over the course of the study because of advances in, I don't know, better battery life, or the interface looks a little different. But you're not fundamentally changing the principle. We need to be a little bit more tolerant of allowing the technology to move during the course of the study; as long as the principle you're testing hasn't changed, that's okay. It's not fundamentally going to break or change your outcomes just because you moved from, say, a laptop to an iPad. The intervention itself, the principle, is still being tested.
So I think it's one of those things where
we used to struggle a lot with this, right,
at NIMH about, you know, at what point does
an adaptation or a change become simply just
a minor tweak versus a complete overhaul of
an intervention? If you just keep translating
things or testing things in one population
or another but the intervention itself is
still treating depression, you know, at some
point that becomes very iterative. So if for whatever reason Epic changes the way that they organize their health record but the principle that you're testing is still the same, we should just not worry about that.
So I'm not sure if that was clear, but it's really about separating out the principle that you're testing. What is the barrier I'm addressing? What is the solution I'm testing? If we simply stop worrying about the fact that the technology supporting it might change over time, because that's what happens with technology, then I think we're okay. So a lot depends on the question, the purpose of the study. If you really are building a brand new product, yes, we have to be much more flexible about the design. But if you're really just testing a principle and technology is just supporting that, then I wouldn't worry so much about the fact that technology changes.
>> Yeah, and that reminds me of what Gary was talking about earlier and the idea of trying to be platform agnostic. It sounds like focusing on testing principles matters, because testing the particular widget in its current form with this current population could spin out infinite numbers of studies if we're not careful. Great. So I wanted to ask, because I think there's been a lot of interest in how best to navigate doing research in this space and in thinking about the partnerships that are needed: I wonder if each of you could talk a little bit about experience you have and suggestions for folks in this area as to who to partner with and how to consider different partners. Gary, we'll have you get the first word.
>> Sure. Well, the work that we tend to do is patient facing; we largely dock into the electronic health record, but our interventions are generally just outside of it. So with that in mind, it's been critical for us to have very strong relationships with health systems. We have a long-standing relationship with a fantastic network of community health centers here in North Carolina that are not necessarily interested in testing the latest, greatest, shiny new technology, but really in trying to understand better ways of leveraging technology to improve their quality of care, the efficiency of their clinical practice, and to help extend the clinical encounter to accommodate patient needs that are largely outside of the more acute crises that they tend to see. So, you know,
the health centers that we work in generally
have, you know, extraordinarily high rates
of diabetes and obesity and all the sort of
related conditions. But treating weight and
doing weight counseling and physical activity
or emotional counseling are really largely
outside of the, of their core abilities in
the context of any given clinical encounter.
So they're very interested in using technology in that way to extend the clinical encounter and deal with some of these conditions outside of the clinic. To that end, it's just been instrumental for us to have a strong partner that's very interested in technology and digital in this way, and that affords us some access to their core data resources so that we can do interesting things with the data. That partnership has been really instrumental.
You know, for the field, I think one of the big challenges for us is that the primary adopters of digital health technologies today tend to be industry. Health systems and payers certainly are adopters, but the primary adopters tend to be vendors of one sort or another. And we know very, very little about the kinds of considerations that go into their choices to adopt a given technology. We don't really have a clear sense of how a given wellness vendor, electronic health record company, patient portal company, or HRA company, the primary adopters these days of digital tools like the ones that I create, along with startups, consider evidence amidst the range of different adoption considerations that they might be working with. We don't have a sense of how
they consider things like cost and what kinds
of cost savings, returns on investment, what
kind of outcomes they might be interested
in. I'll tell you, from my industry experience, the actual clinical outcomes are absolutely critical. But they fit amidst a wider range of outcomes: things like satisfaction and quality of life, whether or not a given treatment is going to reduce membership churn, engagement, and other downstream things like medication adherence that might be largely secondary to whatever kind of treatment you're delivering. Adopters tend to have a much more holistic view, as opposed to looking for change on any clinical endpoint that might be something we write into a grant. And
I just think in general, we know very, very,
very little about what those adoption considerations
are, particularly as it relates to adopting
what can tend to be very expensive digital
technologies. So I'd love to see more partnerships of that type so that we can get more data to help make the dissemination process more efficient.
>> Well, Pat, can you talk a little bit about
partnerships that you've learned from, benefited
from, or encouraged people to seek out as
they navigate this space?
>> Yeah. In fact, I can even give a little advice about what to look for in a partnership, because I've worked with companies, and I still work with a couple of them, and I've worked with people from different disciplines. A lot of who you partner with will depend on what you're trying to do. So, for instance, for anything I've done where the tool is going to be used by a health plan like, say, Kaiser Permanente or Group Health Cooperative that has an electronic health record, I automatically have to make sure I'm working with somebody who understands electronic health records. So that means working
with somebody from bioinformatics. Oftentimes people in bioinformatics are mostly interested in how to use these large-scale data sets to answer questions, particularly about making treatment decisions and so forth. But they also understand a lot about how you take a tool that would facilitate the quality of care and have it interoperate with the electronic health record. They have their own approach for doing that; it's not something that I feel like I have to sit down and study and learn. But as long as we know what the outcome is and how we want this new tool to be embedded in the clinicians' workflow, they can help immensely with thinking about how that would look in an electronic record like Epic, for instance.
And Epic is not easy; everybody will tell you that. The lore around here is that you build a tool for Epic, and you've built that one tool for Epic, and you've spent all your money and gotten very little out of it. So there's still a lot of work that needs to be done in terms of how you work with big companies like Epic. But a bioinformaticist and a computer science engineer can be really helpful in thinking through those things. When it comes to designing new, for instance, in my area, behavioral interventions, what has been really fascinating to me is working with people who do user-centered design work. I've worked with a company called IDEO, which is
worked with a company called IDEO, which is
a huge design company. They do, they design
everything from toilet seats to, you know,
health apps. And, you know, and their approach,
which fascinated me, is very much what I used
to do with, you know, participant action research,
which is really getting a lot of information
about, in their case, you know, the user,
the context the user lives in, what the user's
values are, and then designing around, you
know, around the user and making sure that
anything we develop has low burden, you know,
is easily accessible. If people's values and
goals change, so, for instance, I work with
some people who do food diaries, you know,
and they've, they, here at the University
of Washington, and they've been really, you
know, basically pretty clear that people's
goals when they use food diaries change over
time. So that means that the food diary itself
has to be flexible and change over time. My
colleague, David Mohr at Northwestern is very
much building on that principle of, for depression
by creating these really mini kind of, like,
mental health widgets that people can select
and choose and put together however they like,
you know, basically because what people want
one day is different from what they want from
the next day. So, you know, partnering with
-- you don't have to design, partner with
a design company. IDEO is very expensive.
I wouldn't necessarily recommend that unless
you have a huge grant. But, you know, there
are a lot of options; you might find colleagues in your academic institutions, usually either in an art department or in computer science and engineering, for whom that is their world. They study human behavior from the perspective of how we design things so that they're usable and people will actually hook into them and not drop them after a couple of weeks. So, human-centered design, bioinformatics; I really love working with people who do big data analysis, because the products they can come up with to help support decision making are going to be really fascinating.
I worked with a company called Ginger.io for a very long time. And this gets into what it's like to work with a company. Their initial tool was very much about mobile sensing and mood prediction. They basically came up with an algorithm that can predict whether you're going to be depressed in a couple of days, or going to experience more depression, based on physical activity and social connectedness: how much social media do you use, how many emails do you return, how many text messages do you respond to or send out. And it was great because they had
this built-in tool. We could plop it into
our study, and we got some really nice, you
know, relational data. The downside, though, is that we couldn't get any raw data from the company. So it was impossible for us to do our own modeling to predict depression outcomes based on these metrics. We had to rely completely on the company and just trust that what they were generating for us was real. And then, for one study,
they basically had to back out because they
changed their business model. And this is not unusual for a lot of startups who have angel funding, where the venture capitalist might change their mind about how long they're willing to wait for you to develop your tool and say you need to sell something now. And so that's what we were faced with, with a couple of companies
where, you know, right when you're ready to
work with them, they're no longer available
to you either because they've changed their
business plan or because they've gone bankrupt
and they're not there anymore. So I think it's great to work with companies, but I think you have to be really careful, too, about how stable they are, how stable their product is, and whether or not they're open to sharing some of the raw data that they collect to help you with your science, how collaborative they are. That, I would say, is one of the caveats of working outside of academia. The plus side is that they're nimble, and they often have a lot of immediate money to spend on doing something really beautiful that you probably couldn't afford on an NIH-funded proposal. And so you do get these really nice tools. But in the end, it's kind of hard to publish data when you can't even talk about the algorithm that went into coming up with the outcome.
>> Great. So we only have a few minutes left.
And I'm conscious of trying to get in a few
more of these questions that folks have offered
to us. So one of them notes that Gary
had mentioned RE-AIM as a helpful framework
for generating research questions. Pat, I
wonder if you might be able to just quickly
reflect on any other frameworks that you've
found uniquely helpful in informing implementation
research in this area, in the area of technology?
Any frameworks pop up?
>> Well, more recently it's been participatory action research and user-centered design, because so many of the interventions that I've been trying to implement suffer from design flaws. And so the principle of really thinking about what the consumer and the clinician want out of the therapeutic interaction, what their workflow is like, and the context in which they're doing treatment, I find, is super important and very helpful in designing new solutions for implementing best practices.
>> Cool. Another question came in asking whether either of you have experience using hybrid designs to test implementation of technology-based interventions.
>> So, do you mean, like, hybrid efficacy
effectiveness or hybrid effectiveness implementation?
>> I would assume it would be the latter.
>> The latter.
>> Yeah. Effectiveness implementation.
>> Yeah. I personally have not, I don't think
I've done that yet, no.
>> Gary, have you?
>> Yeah, us neither. No, we haven't.
>> Okay. But it's a great opportunity for folks who are listening to jump into that space. We wanted to get your advice. So we have
folks listening at various stages of their
career. Advice that you have for people who
are earlier in their career who are contemplating
a career in dissemination and implementation
research, particularly as it relates to technology.
What advice do you have for folks? Pat, you
want to start us off?
>> Yeah. I would say go for it. For me, I feel like this is the place to be. And the big part is really being a connector and making those partnerships with people, whether it's with a community organization or with other scientists who can help you think through the problem you're trying to solve from their perspective; that's really critical. I think these are exciting times, particularly with how much healthcare technology has blown up and become an important thing. There are a lot of avenues and a lot of opportunities for you to do really good science in this space.
>> Gary, advice for folks?
>> Yeah. I totally agree. This is a wonderful area, and it's very exciting and fast moving. If one has interdisciplinary leanings, this is a field that can allow you to fully exploit those interests. Just a few really specific suggestions. I think that folks in this space should learn a little tech. It doesn't mean that you need to be a coder or take an advanced Ruby class. But, you know, many, if not most, of us work closely with biostatisticians on our analyses, yet we wouldn't imagine guiding a set of analyses without having taken some classes in statistics. Similarly, I think it's important here to have a sense of technology. The good news is that it's easier than ever to get up to speed on just the very basics of coding. There are fantastic series of free webinars online that are framed around technology for non-technologists. So I think there are very easy ways to ramp up in the space, but it does behoove you to do that. I'd say
the other thing is that this field doesn't exist purely in scientific journals; the scholarship in this space doesn't only exist there. In fact, given the youth of the field and the speed with which it's moving, it's important to really look beyond the academic literature. It's critical here to read blogs, to listen to podcasts, to get on Twitter. A lot of the discussion about new technologies, new approaches, and emerging trends appears in those kinds of places first. And the other thing I'd say is that I think Pat's point about getting out and getting networked is a really important one. And
I think beyond the network of scientists in this space, there are a lot of folks in various incubators around the country, and in health information technology there are a lot of local and regional networks that have a variety of different stakeholders in them. I think getting connected with those types of communities is just really important for forming the partnerships that will lead to great science.
>> Thanks to both of you for joining us at
the virtual fireside. Turning things back
to Sarah to close.
>> So I'd like to go ahead and thank everyone
for their time and attention today and especially
thank Gary, Pat, and David for such a vibrant
conversation. [Inaudible] and we encourage
you to complete the online evaluation. A link
to the survey will open in a new window once
discussion has concluded. Also, an archive
of today's session will be made available
on our website in about one week's time. Feel
free to check that out. We hope to see you
at our next session which will be held in
April. Registration details will be shared
in the coming weeks. Thank you very much for joining
this webinar. You may disconnect at this time.
>> Bye.
