[music]
Alexander A. Bankier, MD, PhD Hello. I’m Alex Bankier, the Deputy Editor for Thoracic Imaging of the journal Radiology.
Welcome to today’s podcast.
We are welcoming Dr. Paras Lakhani from Thomas
Jefferson University Hospital, and today we
will be discussing the article in the current
issue of Radiology, “Deep Learning
at Chest Radiography: Automated Classification
of Pulmonary Tuberculosis Using Convolutional
Neural Networks.”
Welcome Dr. Lakhani.
Paras Lakhani, MD Great, thank you for having
me.
Before going into the core of your article
could you very briefly explain to our listeners
and to our viewers what these convolutional
neural networks are?
Yeah, they are artificial neural networks
that have multiple layers, and they’re very
good for image processing, so they’re good
at analyzing images.
I see.
How does this connect to deep learning, how
do the two concepts interact, and what brought
up the idea to you or to your team to apply
this technology to the diagnosis of tuberculosis?
Sure, yeah, so deep learning is used to refer
to these because the networks have multiple
layers.
So as opposed to simple artificial neural
networks that might have one or two layers,
these have many, many layers, which allows
for more types of representation than, say,
a basic neural network.
So that’s referred to as deep networks or
deep learning, and then what inspired me
to use this for tuberculosis was three things.
One, tuberculosis is a major problem; two,
the availability of public data sets in chest
x-ray for tuberculosis; and three, just the
success of these deep learning techniques
for image classification in general, outside
of medical imaging.
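As a rough illustration of what a convolutional layer actually computes, here is the core sliding-window operation written out in plain Python. This sketch is not from the article; the image, kernel values, and function name are invented for illustration, and real CNNs learn many such filters across many layers.

```python
def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a single-channel image:
    slide the kernel over the image and take dot products."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A tiny synthetic "image" with a vertical edge, and a hand-crafted
# vertical-edge-detecting kernel (in a CNN these weights are learned).
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
print(conv2d(image, kernel))  # -> [[0, 2, 0], [0, 2, 0]]
```

The strong responses in the middle column show the filter firing where the edge sits, which is the basic mechanism a CNN stacks and learns at scale.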
The general approach to using these techniques
in a diagnostic context is in fact quite similar
to how you would test more conventional
imaging techniques.
You have a training set, a validation set,
and a test set, and then you calculate
diagnostic accuracy.
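The evaluation being described, held-out test data plus standard diagnostic-accuracy metrics, can be sketched as follows. The labels and predictions here are invented toy data, not results from the study.

```python
def diagnostic_accuracy(y_true, y_pred):
    """Sensitivity, specificity, and overall accuracy from
    binary labels (1 = disease present) and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(y_true),
    }

# Toy held-out test set: 1 = TB-positive radiograph, 0 = normal
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(diagnostic_accuracy(y_true, y_pred))
# each metric comes out to 0.75 on this toy data
```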
Is this true?
In other words, what makes the difference?
One element, for example, of this whole
deep learning approach is that you need
huge amounts of imaging material, huge amounts
of data. Does this pose problems?
What are the challenges related to that, and
what are the potential ethical issues related
to that?
Yeah, that’s a really good point.
You know with deep learning having more data
is really important.
But, you know, in theory we have lots
of data; it’s just a question of anonymizing
it, de-identifying it, and making it available
to researchers.
A lot of research is being done at
an institution level, and I guess some of the
challenges when you deal with a lot of data
are annotating it and annotating it accurately.
So making sure that people look at it, maybe
not just one radiologist but ideally multiple
radiologists, and maybe with pathology
correlation, all of those things are very important.
The findings of your study were
that the first algorithm you used did quite
well, and the second algorithm you used did
quite well, but when you put the two algorithms
together, that was actually the best performance.
What in your view are the explanations for
that and what are the implications for the
future work on the neural networks?
Yeah, that’s a really good question.
You know, for this particular work, one of
the algorithms, which was AlexNet, I noticed
found more positive cases.
So it was good at finding the positive cases.
And GoogLeNet, which was the other algorithm,
was good at finding the negative cases.
And so the blend was kind of a nice balance.
When you’re dealing with algorithms like
these, it’s been shown that if the algorithms
have slightly different viewpoints in how
they look at the images, then when you blend
them you might get the best of both worlds.
So it did help a little bit in this case, yes.
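A minimal sketch of the kind of blending being described: average the two models’ predicted probabilities for each case and threshold the mean. The probabilities and the 0.5 threshold below are invented for illustration, not the article’s actual values.

```python
def ensemble(probs_a, probs_b, threshold=0.5):
    """Average two models' predicted probabilities per case and
    classify each case at the given threshold."""
    blended = [(a + b) / 2 for a, b in zip(probs_a, probs_b)]
    return [1 if p >= threshold else 0 for p in blended]

# Toy TB probabilities on the same four cases: one model leans
# toward positives, the other toward negatives.
alexnet_probs = [0.9, 0.7, 0.6, 0.2]
googlenet_probs = [0.8, 0.3, 0.2, 0.1]
print(ensemble(alexnet_probs, googlenet_probs))  # -> [1, 1, 0, 0]
```

Because the averaged score moves toward whichever model is more confident, disagreements get softened, which is why models with different “viewpoints” tend to ensemble well.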
So is combining two or more of these algorithms
the future?
I think so, because you’re seeing so many
models that are available these days, and
in fact you can even develop your own models
or you can start with preexisting models.
But there are just so many models, and the
architectures differ, so I think that
blending algorithms will be something we’ll
see a lot of. That technique is called ensembling,
and it’s a straightforward technique, but
I think it’s pretty effective.
One question on a more general note: I have
heard of radiologists who feel threatened or
challenged by the introduction of these deep
learning techniques into radiology.
Are they right to feel threatened, and how will
the introduction of this new technique reshape
radiology and redefine the role of the individual
radiologist?
Yeah, I think a lot of that we’re going to
find out over time, you know, over the next
five or ten years, as we see how far we can
push these algorithms. But right now we’re
at a stage where I think anything that takes
one or two seconds of thought for a radiologist
probably could be replaced by a computer, but
things that require a lot more thought are
going to be difficult to replace.
So I think there’s a lot of value for radiologists
and especially for challenging cases, but
if it’s just describing findings like a
nodule is present or a fracture is present,
I think that a computer can do that.
But if you’re trying to figure out a lot
more than that, and then, you know, even
communicating findings to referring physicians,
talking to your colleagues in front of the
tumor board, there’s so much value that we provide.
So I think we’re going to be using these
tools to help us, but I think we’re still
going to provide value in many ways.
I see.
One of the algorithms that you used, and this
is kind of the elephant in the room in all these
deep learning discussions, comes from a big
company, Google, that so far has not been in
the field of medical imaging.
Does this introduce new sources of bias?
Does this introduce new ethical
considerations that we as a profession have
to consider?
Well, I guess these things are open source,
so anyone has access to them.
Right now there are many newer algorithms
being created.
A lot of them are by major companies.
Microsoft Research has come up with one called
ResNet, which was the 2015 ImageNet champion.
You know, Google has even bested its earlier
work with something called Inception V3 and
now V4.
So the point is, a lot of these big search
engine or big social media companies are investing
heavily in this work because it benefits
what they’re interested in doing. But yeah,
to answer your question about whether it has
any ethical implications: I think as long as
the data is shared openly among all sorts of
researchers, then that’s a good thing, but if
data is private to a particular company, then
that might have some ethical implications.
Thank you.
Your article on the application of deep learning
to the diagnosis of TB will obviously be published
in 2017.
If you extrapolate to 2025, where do you see
this deep learning and neural network approach
in radiology; how will it look at that time?
Wow, I think you’re going to see a lot of
different approaches.
One, you’re going to have a lot more tools
to help diagnose and quantify, you know,
different body parts, such as lung nodules
in the chest, or even entire organs.
We’re going to see it used for dose reduction,
meaning we’re going to be able to generate
images, say from CT scans, that look like
images obtained from a regular-dose protocol
but are actually reduced dose.
So basically, model-based reconstruction
technology will improve.
We’re going to see it help us in reporting.
It’s really endless what we’re going to
see it used for, and I think it will hit our
field probably faster than others.
Okay.
Thank you very much for this conversation.
I hope that this conversation inspires a lot
of our listeners and viewers to go back to
your very interesting and stimulating article,
and with that I would like to thank you.
Great, thank you.
Nice talking to you.
