[ Music ]
>> My name is Mohamed Mahfouz.
I'm the program director of Biomedical Engineering at the University of Tennessee.
I'll take a couple of minutes to explain a little bit of how biomedical engineering does some anthropology work.
The story goes: I moved to Tennessee in 2003, and I was approached by a number of students of Dr. Richard Jantz.
They were interested in some of the work we were doing in biomechanics.
My background is surgical navigation and implant design, which continues until now, plus what we're doing with anthropology.
So I introduced them a little bit to the concept of statistical atlases from our point of view, the clinical and biomedical engineering side, and they were very interested, and we started a huge collaboration that continues to this day.
So that's a quick
introduction to my background.
So today I'm going to give a quick outline of why we use 3D -- and of course in the morning Dr. Ross explained 3D and their methods of using it.
We're doing very similar stuff, but in a slightly different, more engineering-oriented way.
So I'll cover the CT principles, then the calibration, then the modeling, and some of the applications.
And tomorrow we'll continue with our part on facial reconstruction.
So why 3D?
Basically if you look at the literature -- and by the way, that's not only in anthropology but also in implant design and the whole industrial medical device field -- people were not actually doing 3D analysis; instead, using calipers or making measurements from X-rays was very common.
And I've seen even early anthropology papers doing a lot of work from x-rays.
So x-rays come with inherent problems, of course: a dense object can shadow structures behind it, and of course there's no depth perception.
And even if you know how the x-ray was calibrated, the projection changes with the pose of the object relative to the imaging plane.
And this is a problem
I discovered
in the clinical world.
I'm going to talk
about it a little bit,
which is when you have anatomy like the hip: you have different angles between people -- anteversion, retroversion -- going up to 25 degrees.
So even if you calibrate the X-ray and want to make measurements from it, you still have problems.
And of course, limited soft tissue information.
And this is an example showing that you can have an internal structure that you actually cannot identify easily.
Every now and then I'm going to explain what I did before, so you can get a good background on my work.
Some of the work I did -- and probably some of you have heard about it -- was the design of the gender knee.
This is my design, and the gender hip by Zimmer, and some of the osteoporotic hip stems.
That was part of the work when I was initially working with surgeons on a development team.
I found them using a lot of measurements of what in anthropology is called the neck angle -- we call it the proximal angle -- and I was a little bit disturbed, because I knew from 3D, by the time we had scanned the whole collection, that there is big variation in the neck angle in 3D and also in the orientation.
You can see an example here of the x-ray in a female case, where you measure around 142 degrees from the X-ray, but in reality the 3D measurement is 130 degrees -- a 12-degree difference.
And the same for men: even 13 degrees.
So we did a very interesting study, a synthetic analysis, where we rotated a bone by some degrees and then measured, for example, the IM canal and how the IM canal varies.
You can see that as you rotate the bone, you start getting different measurements from the projection.
And in the actual rotation study we were really concentrating on the IM canal, because of the sizing of the hip stem.
So we can see from the errors in rotation that we recorded different measurements, and this observation matched what I've seen in the operating room: they make measurements from the x-ray, then they try a size and the size doesn't work, then they usually move to a different, larger size -- it's always trial and error in this process.
And this is what grabbed my attention at the time.
And of course if you use calipers, you can have -- as we heard in the morning -- intra- and inter-observer error, and we've noticed this a lot in some of the comparative studies that we did.
Also, calipers can only be used for rigid or linear measurements, with limited access to anatomical features.
Internal landmarks -- like if you want to look inside the cranium -- are very difficult unless you do an autopsy; you have a limited number of landmarks; and it's time consuming.
Even if you add a digitizer, it will give you more points, but you still have some of these limitations.
Of course, this is a slide showing you how you do a measurement by filling with sand, but actually you can do this with a computer, with a little bit of computer engineering, and get the same results with higher accuracy.
Now, what is CT? Probably a lot of you know CT very well, but I'm going to explain here how we use it.
When we use CT we run into another set of problems, and I want to explain a little bit how we got around them.
So of course, the CT is an X-ray device capable of imaging cross-sections and creating a stack of images.
Each image represents a slice, and then this stack can be arranged to create a volume.
We have a lot of CT applications; specifically, in my area of implant design and biomechanics we apply this to the knee, the lumbar spine, the hip, and actually the cervical spine, too.
And among the CT applications -- to be clear here -- visualization is a very important part.
Some of these images show volume rendering visualization, which is done routinely if you do any sort of scan and look at the workstation of the CT machine.
Some of these are volume renderings with a transfer function applied to look at something specific, whether bone or soft tissue.
And it's useful in our case sometimes that we do a segmentation and then compare it with a volume rendering to see how good the segmentation quality is.
Now, the CT history -- probably a lot of people know it, but it was Sir Godfrey Hounsfield and Professor Allan Cormack who started this, and they were awarded the Nobel Prize in Physiology or Medicine in 1979.
It was the first time a complete digital reconstruction of images was done entirely mathematically.
The whole idea is very simple.
You have an X-ray source and
then you have an X-ray detector
and nothing has changed except that the technology has advanced well beyond the earlier CT machines.
Now, the basic idea is that you have 2D projection views at all angles around the patient, with a rotating source and detector.
The attenuation at each detector is basically what we measure, and these projections are taken together.
There are different types of geometry, which I'll explain: you have parallel, and you have fan and cone, and I will tell you the difference.
So parallel beam was one of the very early CT geometries, where you have the X-rays coming in parallel and the projection on the other side, and the slices are reconstructed mathematically.
Fan beam is exactly as the name suggests -- like a fan: you have a point source and an arc of detectors.
And then the latest, which is not even fourth-generation but fifth-generation CT, uses a cone beam.
Cone beam is more common now -- if you hear about small animal experiments, where we put the entire animal in and take a CT of the whole animal, that uses cone beam.
We use the cone beam, which looks exactly like that, in a spiral CT to acquire very fast, so you can have applications where you want the patient to be motionless and even hold their breath.
So that's the fifth generation that's coming.
And of course, when you put the slices together, this is a presentation of how one slice is made of pixels; when you stack them together you create a volume, which is made of voxels.
Now, CT reconstruction takes the attenuation data from rotating the source and the detector.
These are basically projections of the attenuation; they come in very small segments and go through a mathematical transform.
This is how they look on the right -- the sinogram -- but when combined together mathematically they create the reconstructed image.
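To make the idea of a projection concrete, here is a toy sketch (not from the talk; the image and values are invented): summing a small 2D slice along rows and along columns gives the parallel-beam projections at 0 and 90 degrees, and stacking such projections over many angles is what forms the sinogram.

```python
import numpy as np

# Tiny synthetic "slice": a dense block on an empty background.
img = np.zeros((8, 8))
img[2:6, 3:5] = 1.0

# A parallel-beam projection is a set of line integrals through the object.
# At 0 degrees the rays run along rows; at 90 degrees, along columns.
proj_0 = img.sum(axis=1)   # one value per row
proj_90 = img.sum(axis=0)  # one value per column

# The scanner collects projections like these at many angles (the sinogram)
# and inverts them mathematically into the reconstructed image.
print(proj_0)   # [0. 0. 2. 2. 2. 2. 0. 0.]
print(proj_90)  # [0. 0. 0. 4. 4. 0. 0. 0.]
```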
Now, some of the CT numbers that we use -- it's very important to understand these basics, because I will explain later why they become very handy from scan to scan.
Typically a CT slice is 512 by 512.
The linear attenuation coefficient is measured between the tube and the detector, and the attenuation coefficient measures how the X-ray is absorbed by the material.
Of course, different materials have different absorption; the size of the patient can also have some effect.
The values are in Hounsfield units, and that's how we use it: the CT number, which is the Hounsfield unit, is basically the attenuation of the material minus the attenuation of water, divided by the attenuation of water, and multiplied by a scaling factor.
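The Hounsfield-unit relation (attenuation relative to water, multiplied by a scaling factor of 1000) can be sketched as follows; the attenuation value used for water here is only illustrative:

```python
def hounsfield(mu, mu_water=0.19):
    """CT number in HU: 1000 * (mu - mu_water) / mu_water.

    mu is the material's linear attenuation coefficient; 0.19/cm is an
    illustrative value for water at typical CT energies.
    """
    return 1000.0 * (mu - mu_water) / mu_water

print(hounsfield(0.19))  # water -> 0.0 HU
print(hounsfield(0.0))   # air   -> -1000.0 HU
```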
So CT is a little different from MRI and other imaging modalities in that it has a very large dynamic range.
By dynamic range we mean that you have very many gray levels, from black to white.
This dynamic range can cause problems: when people work with the images, they like to adjust two parameters, which we call the window level and the window width.
By adjusting these two parameters, if they are not careful they could wash away some of the soft tissue, if you're looking for soft tissue, or wash away some of the bone.
You can see these two parameters here -- for example the window level and the window width -- and what happens if you change them.
The window width is basically the displayed dynamic range, so if you reduce or increase it you can actually wash away some of the details.
That's why you need to be careful, and I'll explain how we get around these problems in the scan.
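Windowing can be sketched as a simple clipping and rescaling of HU values for display (a minimal illustration, not a scanner's actual implementation); values outside the window clip to pure black or white, which is exactly how detail gets washed away:

```python
import numpy as np

def apply_window(hu, level, width):
    """Map HU values to [0, 255] display grays for a given window.

    Values below level - width/2 clip to black, values above
    level + width/2 clip to white.
    """
    lo, hi = level - width / 2.0, level + width / 2.0
    out = (np.clip(hu, lo, hi) - lo) / (hi - lo)
    return (out * 255).astype(np.uint8)

hu = np.array([-1000, -100, 0, 40, 300, 1500])  # air .. dense bone
print(apply_window(hu, level=40, width=400))    # soft-tissue window
print(apply_window(hu, level=500, width=2000))  # wide bone window
```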
Plus, working on the images
you have other problems, too.
You have the CT parameters
that you need to keep track of.
So you have acquisition parameters, which determine the projections of the scan dataset, and then you have the reconstruction parameters themselves.
The acquisition parameters are basically where the CT technologists come in, and I wish they were standard -- every machine is different.
Some of the important parameters: the tube voltage between the cathode and the anode, measured in kilovolts; a higher potential accelerates the electrons and thus increases the beam energy.
Tube current is another important parameter: the current flowing through the cathode, measured in milliamperes.
A larger current increases the number of electrons and thus increases the beam intensity.
So you have energy and intensity, and in most of the new systems both are sometimes adjusted automatically to the size of the patient for image quality.
You also have other things: the scan time, the time taken for the tube and detectors to perform a complete rotation -- a longer scan time increases the total X-ray count -- and then the collimation, which sets the slice thickness along the Z axis, the axis along which the tube and detector assembly moves.
And then, of course, beam filtration: different beam-shaping filters optimized for different examinations.
This is one of the proprietary areas between companies -- they will say, I'm going to do an optimum musculoskeletal scan for you, but they have different parameters that are not easy to learn from one machine to another.
For reconstruction you have the field of view -- if you've had a CT before, they call the initial pass the scout, which is basically a quick scan of the overall region, after which they adjust the window to the region they want scanned.
And as I said, the reconstruction matrix is 512 by 512.
Then you have the reconstruction filter, and again, different filters are available, from smooth to sharp.
Now, the different configurations create all of this, and this is very general, but manufacturers don't all work the same.
I tried to replicate a GE protocol on a Siemens machine and had some trouble because of the milliampere settings -- they have different values.
I'll say later how we solved this problem.
So we had the first generation of CTs, and I don't think any of them still exist.
They used to be called CAT at the time -- Computerized Axial Tomography, a name you don't hear anymore.
That was basically a single detector that translates linearly and rotates around the patient; it was very slow, and in the past we used to have misalignment between the slices.
I don't think any of these exist now.
The second generation started using the fan beam -- a 10-degree fan with multiple detectors -- so coverage was better and the detector size increased a little, but it was still slow, taking about 20 seconds per slice.
The third generation is where they started including the helical scan, with multiple-angle acquisition at each position -- much faster, as you see, at 0.5 seconds per rotation.
And then the fourth-generation CTs have a fan beam with static detectors all the way around -- a larger number of detectors to detect the x-rays -- and only the tube rotates.
Not included here is the fifth generation's cone beam, which is in the latest scanners.
Now, how do we solve the problem between different vendors or manufacturers?
We came up with this idea a long time ago: I can't really keep track of all these parameters from machine to machine.
And the problem was this: if you're not keeping track of these parameters, you can end up with the problem I mentioned earlier -- you wash a little out here, add a little there; it's not consistent.
So the best way is to take these parameters out of the picture completely, and we did this consistently by using a calibration phantom -- the same calibration phantom used to calibrate a CT machine, which is basically something that looks like that, with samples of tissue-like materials.
In this case I had a phantom -- I still have it; it runs around $5K -- with material properties similar to human tissues in these tubes.
They have cortical bone, trabecular bone, material like lung tissue, water, and I think interstitial fluid.
What we did initially, we used this phantom for something completely different -- not an anthropology application, but more for knowing the material and bone density properties from the scans.
We put it in the same scan as the specimen or the patient, so basically both were scanned at the same time.
And as you can see in the three outlines here, you have the cortical bone there, and then the trabecular bone and another kind of material.
Now, our initial experiment was to look at a cadaver -- here's a cadaver, and you can see the line going across it.
This is a profile; the lower curve on the left shows you the intensity profile.
And in the same scan we had the phantom, on the right.
So we knew exactly -- because we know the configuration of the phantom, where the trabecular bone is and where the cortical bone is -- and basically we mapped the density.
We know the density of these materials, we mapped it back into the scan, and we had a density map.
This work was also important for implant design, when they want to see how the stem will interact with different tissues.
Now, we use it for anthropology for standardizing the scan: if we use the phantom with every scan, it doesn't matter if I'm working with a Siemens machine versus GE versus Toshiba.
We can keep a record of the parameters, but they become irrelevant to us.
So for the modeling part here -- we did two methods, or really two methods exist right now: directly finding landmarks and performing measurements on CT slices, which some people do; or generating a 3D surface model by segmenting objects of interest, then performing measurements on the segmented model.
That's the method we use, along with a lot of people, and it has a big similarity with the methods mentioned in the morning.
Modeling directly from CT slices is a problem, although it's done a lot in clinical work.
Here is what we discovered in this example: say I'm going to look at the epicondylar axis -- what we call the transepicondylar axis in the knee, the axis that goes between the medial and lateral epicondyles.
If you measure it from the CT and you're not aware that this axis can cross a number of slices, what happens is shown in green here: it's not in one plane.
Some people make this mistake and take the measurement as if it were, but they are actually taking the projection onto the slice.
It can matter by a few millimeters; that's why you have to be very careful if you are making direct measurements from CT slices.
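The effect can be illustrated with made-up landmark coordinates: an axis that crosses several slices looks shorter when measured in a single slice plane than it really is in 3D (the slice offset here is exaggerated for illustration):

```python
import numpy as np

# Hypothetical epicondyle landmarks in mm (x, y, z); the z offset means
# they fall on different CT slices (exaggerated for illustration).
medial = np.array([30.0, 10.0, 0.0])
lateral = np.array([-45.0, 5.0, 20.0])

delta = lateral - medial
true_len = np.linalg.norm(delta)      # full 3D distance
in_plane = np.linalg.norm(delta[:2])  # projection onto a single slice

# Measuring on one slice underestimates the axis by a few millimeters.
print(round(true_len, 2), round(in_plane, 2))
```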
The other approach is to use segmentation, and then on the segmentation you apply statistical atlases.
This can help us in reconstructing missing data, in measurements and landmarking, and then we can run different types of statistical analyses.
I'm going to explain each of these steps right now.
So what is a 3D model?
It's basically a CAD representation: a number of what we call elements, or triangular elements, all connected together at a number of points -- the vertices.
So if I have these vertices, it's no different from using a digitizer -- I'm still collecting vertices -- but here I have much more information.
Now, one of our initial applications, still going on, is to actually use it in segmentation.
This is a scan from our spine application; you can see the 3D models.
This is a CT.
Since the scan was done for the lumbar spine, the quality is a little poor, because that's the effect of reducing the radiation to the patient -- this is a live patient.
So you can see here that we can actually get a very decent segmentation.
So the segmentation, whether done automatically or manually -- in our case we do both -- starts from the CT image.
And here's the problem I mentioned earlier: if you didn't do calibration, you don't have a phantom, and you change the windows, you can wash out some of the details there.
You have to be careful.
So you have the contour, then the region, and then basically you can recreate the bone model and have a 3D model.
This is some of the earlier segmentation we did for an entire cadaver I had, showing the quality of the segmentation.
Initially we did it manually; now we are doing even automatic segmentation with the help of a statistical atlas.
So what is the statistical bone atlas -- what am I basically talking about here?
It is built from surface mesh models generated from computed tomography data.
These models are converted to a normalized mesh, meaning that when we build the atlas we make sure the points are distributed properly and we have control over the number of points.
The models are then represented with principal components.
Principal component analysis is not used here in the context of measuring statistics but as a data reduction tool, because if you start building these statistical atlases to use later in analysis, they become huge.
So principal components -- used a lot in anthropology -- are used for data reduction, but at the same time they capture the statistics and the modes of variation of a population, or even of a single bone.
And then an average bone can be computed from all these bones together -- like in the morning we saw the process of taking average vertices.
Same here: you have an average bone, which we used to call a template.
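A minimal sketch of that pipeline, assuming vertex correspondence is already established (toy random data, not real bones): the template is the vertex-wise mean, and PCA via SVD keeps a few modes of variation as a compact representation of each bone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy atlas: 50 bones, each a normalized mesh of 200 corresponding
# vertices (x, y, z), flattened to one row of 600 numbers per bone.
n_bones, n_verts = 50, 200
shapes = rng.normal(size=(n_bones, n_verts * 3))

# The "template" is the average bone: the mean of corresponding vertices.
template = shapes.mean(axis=0)

# PCA as data reduction: keep only the leading modes of variation.
centered = shapes - template
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 10                          # number of modes kept
scores = centered @ Vt[:k].T    # each bone: 600 numbers -> 10 scores

# Any bone is approximated as the template plus a combination of modes.
recon = template + scores @ Vt[:k]
print(scores.shape)  # (50, 10)
```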
Now, this is an example of the use of a statistical atlas.
You have your template bone, which is the average -- the red one -- and you have a new bone, a cranium that you want to add to your statistical atlas so you can run analyses on it later.
So the two models first go through an initial registration, where the two cranial models are aligned.
Remember, the red here is the template bone.
And then something happens that we call surface correspondence, and this is the key point, which we explain in the next slide.
Think about it this way: the template model is a kind of deformable model that will change its shape and become exactly the other one.
Of course a lot of
mathematics behind
which we've been for years.
By doing this step you added
the other bone into the database
or statistical database,
without actually having
to do anything on its axes.
So basically by this template
deforms, captures the statistics
and then basically you
have your new bone is added
and at the same time all
its statistics captured.
And we tried to get some part
of show you how this happens.
So you have the new mesh -- I apologize for the shifting colors, but this is the new bone that you want to add, and your base mesh is the green template.
The new bone may come from a different source -- not your lab, another lab.
Someone segmented the bone, segmented the skull, with some number of points, while your base template may not have the same number of points.
So usually when we put them together to register, you will have a number of vertices competing for the same location.
So we do something called mutual correspondence, which basically redistributes these points, but the redistribution comes from the base mesh.
Nearby vertices are matched and averaged, allowing a many-to-one relationship with the mean, and that's one iteration.
Then we take the base mesh and do the same thing in the other direction, finding the closest point on the new mesh.
Along with a weighted combination of vectors, this process iterates back and forth until the vertex distribution of the base template comes very close -- within a very small threshold -- to the new bone, or cranium in this case.
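One pass of that back-and-forth matching might be sketched like this (a simplified nearest-neighbor step on toy points; the real method adds averaging, weighted vector combinations, and iteration to a threshold):

```python
import numpy as np

def closest_point_match(template, target):
    """For each template vertex, find the nearest target vertex.

    One step of the correspondence loop: the deformable template is
    pulled toward the target surface; many-to-one matches can occur
    when vertex counts differ between the meshes.
    """
    # Pairwise distances: shape (n_template, n_target).
    d = np.linalg.norm(template[:, None, :] - target[None, :, :], axis=2)
    idx = d.argmin(axis=1)
    return target[idx], idx

template = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
target = np.array([[0.1, 0, 0], [2.2, 0, 0]])
matched, idx = closest_point_match(template, target)
print(idx)  # [0 0 1] -> two template vertices share one target vertex
```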
So by doing this, we actually capture this cranium into our statistical atlas.
Now, this is where we differ completely from the mesh method described in the morning.
In this case we have full control of the vertices, and we can make all the measurements we want.
In the feature extraction we divide the features into two parts: the statistical shape model, and the specific landmarks.
We call the first global, because the shape is global -- the shape of the model as a whole.
The specific landmarks are local measurements.
All right, so the statistical representation.
The landmark-based approach has disadvantages: manually defined landmarks, time-consuming, high interobserver error.
What we're saying is that by doing these steps -- and we will explain tomorrow -- we can automate this entire process, so we don't really have to go and find the different landmarks; it's completely automated.
But at the beginning, of course, someone needs to go and check.
The anthropology people still have to come and check whether we have errors or whether it's working properly.
The surface-based approach has the advantage that you can measure over the entire bone surface after establishing correspondence -- correspondence is the keyword here.
In this case we don't have just 21 or 71 landmarks; anywhere we want to measure, we can measure.
It's a quick and convenient method, it eliminates subjectivity, and it's only done once -- a point I forgot to mention.
Say you have 500 bones, 500 skulls, like some of the studies we do with NIJ.
Do I go and measure every one of them?
That would defeat the purpose of what we're doing here.
The whole idea is that we can do the measurement once and propagate it across the 500 bones or 1,000 bones.
Once you identify the landmarks you want, you can propagate the measurements, and then you can go and randomly select someone, for example, and make comparison measurements.
Because if you do this by hand with a whole collection of crania, it can take forever.
Some of the earlier work that we did -- and a good part of doing it this way is that you can identify areas with high variation.
A number of PhDs came out of Tennessee using my methods, identifying areas of the clavicle and of other bones; some people did analyses on Lucy's bones.
In these examples the color code tells you the high variation, so after doing this you can rank the population, look at areas of high variation, and then go after these areas and study them more.
One of the earlier things we did, when I started my relationship with Richard Jantz, is he brought me this bone.
It was discovered washed up in a river in Tennessee, and the police didn't want to investigate.
He said: what can you do about it? Make your estimation.
So I took this bone, we did a CT scan, and we came up with the right estimation of the height and even the sex.
I told him it was a female, and that turned out correct in the end -- the DNA test was done, it was confirmed, and the case was closed.
This was 2003-2004, my first interaction with anthropology at UT.
Some of this work, which we're doing right now, is to use it for fragmentary bone reconstruction.
In cases like this, where you have missing bone, a statistical atlas helps -- and it's very important when you build a statistical atlas -- I want to use the correct term.
You don't call it race, or is it ethnicity -- what do you call it?
I forgot.
>> Ancestry.
>> Ancestry.
Okay, so to build
[ laughter ]
-- I think I was writing a clinical paper the other day, and they moved everything.
I put in the anthropology terms and they moved them back to gender or race, so we kept it the way they want, clinical.
The funny thing is we removed ancestry, removed all of this.
But anyhow, that could really help in filling in the missing information in fragmentary bone.
There's a big initiative going on in my group, with some people in anthropology -- Richard Jantz, Matlian -- working on fragmentary bone, trying to use this as a template.
We did some work before -- Adam Sylvester and I worked together.
He was very interested in these methods, and he brought samples of Lucy, Australopithecus afarensis, and it actually worked very well.
We published a paper using the same methods in paleoanthropology; it wasn't my interest at the time, but I got very interested in it.
One of our clinical applications, very important -- and here is the work of Dr. Emam -- is pelvis reconstruction.
Basically, why not?
In the clinic I see severe problems with people whose pelvis has already completely deteriorated.
They can't place an implant for them; they can't place a cup; they can't.
I mean, it's something like that.
So we also have big work -- and I filed some patent applications on some of the software here -- on how we can really reconstruct and build a cup for the patient and [inaudible] process.
Because these patients are in very bad shape, and only a handful of surgeons in the country do these kinds of surgeries.
And here is the process.
You've got the training data, you've got the atlas, you reconstruct the pelvis, you get a new patient in, you do defect classification, then of course landmarking and measurements, and then the statistical atlas.
Basically this is how you construct a cup or an implant for these severe cases.
We had another grant for automatic measurements of the skull.
Our results right now -- we'll publish the results later in the fall -- are that we increased the sexing accuracy for males and females to 98 percent.
I'm not sure what the current high is.
That's independent of any other bone; it's based only on the cranium.
And we are also taking thickness measurements.
The other work is not the purpose of this workshop, but they can overlap, because part of our work in increasing this percentage of discrimination is basically including some internal features and trying to relate them to the x-ray projection.
So if you have a skull that you don't want to destroy, you can relate some of these parameters together and come up with the proper identification.
Now, we previously did work on the patella, published in Forensic Science International a few years ago.
We took 45 different measurements on the patella; this is the paper, published in 2007.
The results were, I think, 93 percent in some cases, and with additional information 96 percent.
You see the confusion matrix here: 28 females predicted correctly and 2 wrong, and 44 males predicted correctly and 3 wrong.
That was basically to test the software.
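From the counts quoted above, the overall accuracy works out as follows:

```python
import numpy as np

# Rows = actual, columns = predicted (female, male), as quoted in the talk:
# 28 females correct with 2 wrong; 44 males correct with 3 wrong.
cm = np.array([[28, 2],
               [3, 44]])

accuracy = np.trace(cm) / cm.sum()  # correct predictions / total cases
print(round(accuracy * 100, 1))     # 93.5 -- the "93 percent" quoted
```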
And that basically cites some of our work.
Any questions, please?
>> I have a question.
Could you please
explain the phantom?
Is it like a control?
>> It is a control, absolutely.
The phantom -- I can go to the slide.
This is a standard phantom that technologists use in calibrating CT machines; they want to check the intensity and the energy, so they want to calibrate.
I wish I had brought it with us.
It's kind of a platform with these tubes, and the tubes contain material similar to soft tissue.
So you can run it with a scan.
Here's how it goes: you put in your specimen -- a cadaver, a bone, whatever; I even use it with live patients.
The technologist will set all the parameters I mentioned -- intensity, the window, all of them -- and once the scan runs, they all apply to the phantom too.
So now you have your sample and the phantom together; it's a kind of control, and it takes away the worry of doing the same scan on a different machine.
So you go to another machine, you use the phantom, and in this case you don't have to worry.
You can adjust the intensity levels along a curve like this and normalize the scans between the machines, so you don't have to worry about all the parameters.
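That normalization step can be sketched as fitting a linear map from each machine's raw phantom-tube intensities to a common reference scale (all numbers below are invented for illustration; the actual procedure may differ):

```python
import numpy as np

# Hypothetical mean intensities measured inside the phantom tubes on two
# different scanners, plus known reference values for each material.
reference = np.array([0.0, 300.0, 1200.0])   # water, trabecular, cortical
scanner_a = np.array([10.0, 290.0, 1150.0])
scanner_b = np.array([-25.0, 340.0, 1300.0])

# Fit a linear map from each scanner's raw values to the reference scale.
a1, a0 = np.polyfit(scanner_a, reference, 1)
b1, b0 = np.polyfit(scanner_b, reference, 1)

def normalize(raw, slope, intercept):
    """Map raw scanner intensities onto the phantom's reference scale."""
    return slope * raw + intercept

# After normalization the two machines roughly agree on the phantom
# materials, so the acquisition parameters become irrelevant downstream.
print(normalize(scanner_a, a1, a0))
print(normalize(scanner_b, b1, b0))
```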
>> For the automatic detection of the landmarks, are you then moving toward an application or program that anthropologists can use?
>> What we're doing right now, in the grant that I mentioned about the measurements, is we will put out some data.
We were not required to transition our software, but we actually applied for a continuation where we deliver complete software -- something like FORDISC, but 3D.
So basically similar to what you have, but not limited only to digitized data: if you have a scan, you bring it in; it doesn't matter.
We actually put in a full proposal to do that and have it online.
I will have some elements online, but not the full-blown version, because it's a big application.
>> Question, Mohamed.
On the anthropological
differences in slide 49,
you have low variation
and high variation.
Do you have numerical
values for that?
>> We have numerical values.
We always put this legend
here, so I'll come to it.
[Background noise]
>> When we do it, it is
based on ranking.
Some of the work -- if you know
Fisher's discriminant ratio,
that's basically how it
ranks the variation.
So it's easier for us -- we have
the numbers, but it's easier
for us to say, okay, point
me to the area
where you have the
highest variation.
So for example, the high
variation -- you look at the data
and it comes out very consistent,
like the epicondyles;
you look at the female here.
This is where we actually
see the most variation
between men and women.
And the Fisher discriminant
is a discriminant analysis
you can do, for example,
between males and females.
But with multiple discriminants
you can do it across populations,
different ancestries,
I was going to say,
[ laughter ]
ethnicities, populations.
So you can do this
across multiple discriminants,
a little bit
like the general case of
Fisher, which compares
between two different
groups.
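The ranking he describes can be illustrated with the standard per-feature form of Fisher's discriminant ratio: the squared difference of the group means divided by the sum of the group variances. Features with higher ratios separate the two groups more strongly, so sorting by the ratio points to the areas of highest variation between, say, males and females. The measurements below are made up for illustration.

```python
import numpy as np

def fisher_discriminant_ratio(group_a, group_b):
    """Per-feature Fisher discriminant ratio:
    (mean_a - mean_b)^2 / (var_a + var_b).
    Higher values mark features that separate the groups more strongly."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    den = a.var(axis=0) + b.var(axis=0)
    return num / den

# Hypothetical measurements (rows: specimens, columns: two landmark distances).
males   = np.array([[80.0, 30.0], [82.0, 31.0], [81.0, 29.0]])
females = np.array([[72.0, 30.5], [73.0, 29.5], [71.0, 30.0]])

fdr = fisher_discriminant_ratio(males, females)
ranking = np.argsort(fdr)[::-1]   # feature indices, most discriminative first
```

Here the first feature has well-separated group means, so it dominates the ranking, while the second feature's means coincide and its ratio is near zero.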
>> I have a question.
If you are reconstructing,
like the one where you had
a female that's missing
the distal part of a femur,
and you reconstruct that,
and that's a high-variation
area, how do you choose
the reconstruction parameters?
>> The reconstruction's
done automatically.
Here's what happened in this
case: the only bone we
have is partial.
We have the proximal.
The distal was completely
missing.
But the atlas you can
divide any way you want.
So for example, you can have
an atlas for the entire femur.
We have one for the entire femur.
We have one for the distal
by itself.
We have one for the proximal.
And you can treat the shaft as
a low-variation area, a little bit,
unless you are looking
at the isthmus
and at the IM canal.
So every atlas has its own
weights that we build
from statistical inference.
We can actually predict --
the proximal part carries more
information than the distal,
so we can have a good,
precise prediction.
Actually, what came out of this
prediction was that the height
of this woman was exactly 5.2,
and when we estimated it,
it was correct,
actually, from records.
So you can't
look at the bone as one bone;
you can divide it
into multiple areas,
and then you can have
multiple statistical atlases
on different parts of the
bone, like distal, proximal.
The pelvis, for example, is
divided into a number of areas.
Same for crania.
That's how we give weight.
So it's weighted such that,
from our work, we can predict the
entire bone from a partial one,
from the distal or the proximal
or part of the shaft.
Now, of course, it's
completely different
when you have the full femur,
where you have more information,
but a partial bone can give you
a good idea of up to, what,
98 percent of the bone.
We have very good, decent
results on long bones.
I think we're beyond
the long bones right now
because we did so much
work on long bones.
And I design implants;
I did everything
with the long bone.
We're moving into the more
complicated areas like crania,
lumbar, pelvis -- the pelvis is
really difficult --
where the bones are more
complicated in structure,
not like long bones.
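One common way to build this kind of atlas-based completion, and a reasonable reading of what he describes, is a principal-component shape model: learn the mean shape and main modes of variation from complete bones, estimate the mode weights from the observed (say, proximal) points only, and then reconstruct the full shape, missing distal part included. This is a toy sketch under that assumption, with a synthetic one-mode atlas, not the speaker's actual software.

```python
import numpy as np

def build_atlas(shapes, n_modes=1):
    """shapes: (n_specimens, n_points) array of flattened, corresponding
    coordinates. Returns the mean shape and the top principal modes."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # Rows of vt are the principal modes of variation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

def complete_shape(mean, modes, observed_idx, observed_vals):
    """Estimate mode weights from the observed (e.g. proximal) points only,
    then reconstruct the full bone, including the missing part."""
    a = modes[:, observed_idx].T                  # (n_obs, n_modes)
    b = observed_vals - mean[observed_idx]
    w, *_ = np.linalg.lstsq(a, b, rcond=None)
    return mean + w @ modes

# Hypothetical training data: shapes varying along a single known mode.
base = np.array([0.0, 1.0, 2.0, 3.0])
mode = np.array([1.0, -1.0, 1.0, -1.0])
shapes = np.array([base + t * mode for t in (-1.0, 0.0, 1.0)])
mean, modes = build_atlas(shapes, n_modes=1)

# "Excavated" specimen: only the first two points are preserved.
target = base + 0.5 * mode
est = complete_shape(mean, modes, np.array([0, 1]), target[:2])
```

Because the synthetic specimen lies exactly on the learned mode, the two observed points are enough to recover the whole shape; real bones would only be approximated, which is why the speaker weights different atlas regions by how much information they carry.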
>> So, Mohamed, on
this long -- this femur.
What equation do you
use to determine 5.2?
>> I use the best
equations, the same equations
from the morning, the
same ones exactly.
>> So did you get a range,
not just an exact stature?
>> Yeah -- no, it was a range,
but what we did is we
came up with the height,
I mean the length of it.
We plugged it into the
equation, so I'm not working
on prediction; I'm using
the existing equations.
But we are predicting
the 3D structure.
Now, the trick here is
you can say what kind
of ancestry it is --
is it European?
So you can play with
different ones.
But for us, we have a good
understanding of the European
and the black populations, so
we have good --
but this took years
to come to this point.
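Plugging an atlas-predicted femur length into an existing stature regression looks like the sketch below: stature = slope x femur length + intercept, reported with a plus-or-minus two-standard-error range rather than a point value, which matches his answer that the result is a range. The coefficients here are illustrative placeholders, not the published values he used; in practice you would take slope, intercept, and SEE from the appropriate ancestry and sex group in the published tables.

```python
def estimate_stature(femur_len_cm, slope, intercept, see):
    """Plug a (measured or atlas-predicted) femur length into a linear
    stature regression and report a +/- 2*SEE range, not a point value."""
    stature = slope * femur_len_cm + intercept
    return stature, (stature - 2 * see, stature + 2 * see)

# Illustrative coefficients only -- substitute the published values
# for the relevant population group in real casework.
stature, (lo, hi) = estimate_stature(42.0, slope=2.47, intercept=54.10, see=3.7)
```

For this made-up 42 cm femur the point estimate is about 157.8 cm with a roughly 150-165 cm range, which is why stature is reported as an interval.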
>> I think you're questioned
out for the day.
>> Thank you.
