Hello.
Welcome to the Ask Me Anything
about Empower live webinar.
We must apologize we've had
some technical difficulties
with the telephone and
also with the system
that we use for the webinars.
So we really apologize.
We appreciate your
being with us,
and we know that your
time is precious.
So please accept my apologies.
So let's go ahead
and get started.
And if you have any
difficulty seeing or hearing,
please send in a
question using the Q&A.
And we will do our best
to rectify the situation.
So I'm going to start out
by answering some questions
that we received as part of your
registration for this event.
And then hopefully we can
get to live Q&A. Again,
we've had a pretty tough
time here this morning trying
to get everything to work.
So worst case scenario, if
we can't do the live Q&A,
I promise that I will
answer every question
and send each of you an email
with answers to your questions.
So no worries about that.
Alright, just a couple
of housekeeping notes.
Again, if you have questions,
use the Q&A feature to submit
questions during the webinar.
We'll get to as many of them
as we can during the time
that we have here together.
And that also
hinges upon my being
able to share my
screen with Empower.
So we'll test that
out in a little bit.
I see a question
is coming in now.
Alright, when we're
done, you're going
to get an email with a link
to the recorded webinar.
I've also provided a
very detailed slide deck.
It's a presentation that I made at one of our inform meetings a couple of years ago.
It goes into the details of how the traditional algorithm and the ApexTrack algorithm work for peak detection and integration.
So I thought you would
appreciate having that
because it goes into a lot
of the detail, the math,
how things work, and why
things happen the way they do.
Some of that, we'll
probably address here
during the webinar, alright?
And again, any questions
we don't get to,
I promise to reach out
to everyone individually
with an email and
answer your questions.
So just a quick introduction.
For those that don't
know me, I'm Neil Lander.
And I've been with
Waters 25 years now,
which sounds like a lot.
But the gentleman that hired
me has been here for 52 years.
So I guess it's all relative.
Started out in the field
doing field technical support
and migrated to our
corporate headquarters
to work for the
customer training group
and spent quite
a number of years
there doing customer training
and managing that group.
And then about five
years ago, I moved
into the Informatics
marketing group,
where I look after Empower.
So let's not waste
any more time.
Let's get right into it.
The first question is a
really, really good question.
And it's actually
two questions in one.
So let me just
read the question.
And then I will
give you my answers.
So the first part
to the question
is, are there challenges
about using MS data
for quantitative methods?
Second question,
how do you account for less-than-ideal MS peaks and integrate them, other than by smoothing?
Now, I'm going to take the
two questions in reverse
because the first one could
be quite a long discussion.
So here, hopefully, you can see exactly what the person who asked the question was describing. In this case, this is some QDa data.
And if I use ApexTrack
in this case,
and I set my start and
stop and I click Integrate,
it optimizes the peak width
and the detection threshold.
Well, you can see the
peak looks rather poor.
And it thinks it's three
peaks rather than one peak.
So of course, as the person
asking the question said,
other than smoothing, what
can we do in this case?
Because here is the same
peak with smoothing.
So what you could potentially do, if you didn't want to use the smoothing function for some reason, is increase the peak width.
So the peak still has that same
shape as we originally saw.
But if I increase
the peak width,
then it sees it as one peak.
And it's integrated and
it's detected as one peak.
And I can adjust the baseline to integrate it the way I'd like.
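As a purely illustrative aside (this is hypothetical data and generic signal-processing code, not Empower's smoothing or the ApexTrack algorithm itself), here is a small Python sketch of why smoothing helps in a case like this: noise riding on a weak MS peak can register as several apexes, and a smoothing filter merges them back into one.

import numpy as np
from scipy.signal import savgol_filter, find_peaks

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)                               # retention time, minutes
peak = np.exp(-((t - 0.5) ** 2) / (2 * 0.03 ** 2))       # one true peak
noisy = peak + rng.normal(0, 0.05, t.size)               # add detector noise

# Raw trace: noise on top of the peak can show up as extra apexes.
raw_apexes, _ = find_peaks(noisy, height=0.5)

# Smoothed trace: a Savitzky-Golay filter (window in points, polynomial order)
# stands in here for whatever smoothing you would apply before detection.
smoothed = savgol_filter(noisy, window_length=31, polyorder=3)
smooth_apexes, _ = find_peaks(smoothed, height=0.5)

print(f"apexes before smoothing: {len(raw_apexes)}, after: {len(smooth_apexes)}")

Increasing the peak width in the processing method has a similar merging effect, as the example above showed: the detection stops responding to features much narrower than the expected peak.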
So it's a great question.
And I think let's take a step
back and explore this question
because it is a
very good question.
I've seen data from optical
detectors that look poor.
It really does depend
on the application.
And it depends on
the signal to noise
ratio and the concentration
and so forth and so on.
Because again, I've
seen chromatograms
with peaks that look like
this from optical detectors
because they're very,
very low concentration.
So as far as the peak detection and integration go, whether you smooth or increase the peak width, you may want to use a combination of the two to improve the peak shape, so that you can do something quantitative with the peak.
Now, what about the first part of the question, the challenges of using MS data for quantitative methods?
And that's a very good question.
I think it does depend
on the application.
And I would say that about anything, whether it's optical detection or MS detection.
Whatever it happens
to be, it really
does depend on the application.
I think what's happened,
over the years,
is with optical detection,
let's just say UV detection
in particular, we have come to
expect nice linear calibration
curves over a fairly wide range.
We've come to
expect percent RSDs
on areas for
replicate injections
to be quite low, 1% or less.
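Just to pin down what that means, percent RSD is simply the relative standard deviation of the replicate peak areas. A quick sketch with made-up numbers:

import numpy as np

areas = np.array([10050.0, 10012.0, 9987.0, 10033.0, 9998.0])  # hypothetical replicate injections
rsd_percent = 100.0 * areas.std(ddof=1) / areas.mean()
print(f"%RSD of peak areas: {rsd_percent:.2f}%")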
As the technology has progressed from HPLC to UPLC, and as column technology and stationary phases have improved, and we've improved our peak shapes,
we are able to get quite
good reproducibility
on replicate injections
with optical detection.
Now, that being said, we also
have to be aware of Beer's law.
Because if we over
concentrate something,
then we can lose that
linear relationship
between concentration and
response on a UV detector.
And if I look at MS detection,
in some applications,
and I'm going to preface
that, some applications,
I see better sensitivity using
mass detection and a wider
range for the calibration curve.
But in other cases, I've
seen UV detection better.
It all depends on the compounds.
And one of the things
that can happen
is that you can saturate the
source on an MS detector.
And when that happens, you
lose that linear relationship.
So I have seen some applications
where the individual did
use a quadratic fit for
the calibration curve
rather than a linear
fit, which is what we're
used to with the UV detector.
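If it's useful to see why a quadratic fit can make sense, here is a minimal sketch with invented calibration data (plain least-squares polynomial fits, nothing Empower-specific): once the response starts to flatten at high concentration, a second-order fit describes the points better than a straight line.

import numpy as np

conc     = np.array([1, 2, 5, 10, 20, 50, 100.0])             # hypothetical amounts
response = np.array([1.0, 2.1, 5.0, 9.8, 18.5, 41.0, 70.0])   # response flattening at the top

lin  = np.polyfit(conc, response, 1)    # first-order (linear) fit
quad = np.polyfit(conc, response, 2)    # second-order (quadratic) fit

for name, coeffs in (("linear", lin), ("quadratic", quad)):
    residuals = response - np.polyval(coeffs, conc)
    print(f"{name:9s} sum of squared residuals: {np.sum(residuals ** 2):.2f}")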
So in conclusion, it's
a really great question.
And it could be a
whole hour's discussion
just on this point alone.
It really depends
on the application,
depends on how good
your system is.
Meaning, how clean is it?
Absolutely use the highest
possible quality solvents.
Be very careful not to
contaminate anything
in the system.
Because if you have any
column bleed or something
like that with the
MS detection, that
adds to the background noise.
And that's going to
reduce your ability
to see low levels of compounds.
So I hope I've
answered that question.
And again, you could have
a whole hour's discussion
just on that topic.
So the next question that we
received, also a really great
question, how is
auto integration
different from
manual integration?
Is traditional integration any better?
So I think let's take
a step back and define
what do we mean by
auto integration
versus manual integration.
Generally speaking, when we
talk about auto integration,
we're talking about
setting up a processing
method with the appropriate
peak width threshold,
minimum area, minimum height,
whatever it happens to be,
maybe some integration
events, and then applying
that processing method
to all the samples.
So I set up my
processing method.
I take the sample set.
I process the sample set.
I get a result set.
That, to me, is
auto integration.
We're letting Empower
use the processing method
as we've created it to
generate the results.
And here in this slide, you see this happens to be with ApexTrack. You can tell because the integration algorithm says ApexTrack, but also, you see in the global parameters, you've got a section for detection and a section for integration.
So remember with
ApexTrack the detection
of the peak and the
integration of the peak
are split into two
separate functions.
They're decoupled.
So Empower takes the second
derivative of the chromatogram.
And from that second
derivative, it
determines what peaks are there.
It detects peaks.
Separately, using liftoff
percentage and touchdown
percentage, it
draws in baselines
and calculates
areas and heights.
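If you like to see the principle in code, here is a rough, hypothetical Python sketch of the detection idea only. This is not the ApexTrack implementation; it just shows that a peak apex appears as a strong negative excursion in the second derivative, so detection can work on the second derivative while baseline placement (liftoff and touchdown) is handled as a separate step.

import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 10, 2000)                             # retention time, minutes
signal = (np.exp(-((t - 3.0) ** 2) / 0.02)               # two made-up peaks
          + 0.4 * np.exp(-((t - 3.4) ** 2) / 0.02))

d2 = np.gradient(np.gradient(signal, t), t)              # second derivative

# Detection: apexes are strong minima of the second derivative, i.e. peaks
# in -d2 above a threshold. Integration (drawing baselines and computing
# areas and heights) would be a separate, decoupled step, not shown here.
apexes, _ = find_peaks(-d2, height=0.1 * np.max(-d2))
print("detected apexes at:", np.round(t[apexes], 2), "minutes")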
So if we talk about
manual integration,
hopefully, you can
see in the slide,
if you look at the Integration Type field, any letters that are capitals, whether it's a capital BB, which means Baseline to Baseline, or a capital BV, Baseline to Valley, that's the result of the auto integration.
If I manually move a
peak start or a peak end,
you'll notice that the
letter goes to a lower case,
as you see in peak
number six in the slide.
So that indicates that I've manually moved the peak start or peak end of one of the peaks, in this case.
So that's manual
integration, where
I either, A, move the peak
start and/or peak end,
or, B, maybe there's a
little peak in the baseline
that the auto integration
didn't pick up.
So I draw a baseline under that
peak to manually integrate it.
Now, the other part
of the question
is, ApexTrack
versus traditional.
There's really no difference,
in this case, alright?
Here is the same chromatogram.
Only in this case, I developed
a traditional method.
And again, you can see it says
traditional in the upper right
corner of the slide.
But here, you see the difference: peak width and threshold, min area, min height. Those are the global parameters you have to deal with.
So the traditional
algorithm couples detection
and integration together.
And you cannot separate those.
So in a very brief
statement, that's
the difference between
the two algorithms.
There's obviously a lot more.
But again, if I use the
traditional algorithm,
I create a method, process the
sample set, get a result set.
To me, that's auto integration.
Manual integration would be the
same thing, where I've gone in
and I've moved either a
peak start or a peak end.
Or again, if I had
a small peak, I
could draw a baseline
into a small peak
that was not detected
and integrated
based on the auto integration.
So I hope that
answers that question.
Alright, here's another
very good question,
the subject of impurities.
Actually, taking a cue from
that last question, when
I look at a lot of
methods, let's say
I'm just doing an assay
and I'm looking at an API,
I usually have a big,
clean, well-separated peak.
And auto integration works
pretty well in those cases.
But then if we look
at impurity methods,
these become a little
more challenging.
Because now, you could have a
lot of very, very small peaks.
And they may be fused together.
And the baseline could
be going up and down.
And you might even
have a big API peak
in the middle of everything.
So how do you deal
with those things?
So this is a great question.
When we need to calculate the impurity percentage, how can you choose the best integration method to integrate standards and impurity samples using the same processing method, because the amount in the standard solution is very high compared to the amount of the impurity? So when I use the same processing method, I do not get good integration.
So here is a great example.
And let me just go right
to this next slide.
OK, well, let's
back up a second.
So there is my API
peak, a big peak.
But when I zoom in
on the baseline,
now I can see I have
all the impurity peaks.
So this is a great
example of how I
can use the integration events.
And that's that table underneath the global, or universal, parameters that I can use.
So when I'm setting up my method and I set my peak width, my detection threshold, my liftoff percentage, my touchdown percentage, my minimum area, minimum height, whatever it is I'm setting up, that may not work for all the samples.
It works great for standard.
But here's a sample with
lots of little peaks.
So if you look at the screen capture here from Empower, what I've done is I've used the timed events to adjust things like the peak width. And actually, that's the one I've used primarily in this case: adjusting the peak width at different points in the chromatogram,
so that I have good peak
detection in a chromatogram
where I've got lots
of little peaks
along with a great big peak.
So my recommendation
would be, once you've
got the initial method set up,
take your worst case scenario
like this, where you've got
a lot of these little peaks,
and then you can use
the set peak width.
You also have set detection
threshold as an event.
You have set liftoff percentage,
set touchdown percentage.
You've got all of
those different events
that you can work with to try
to optimize your peak detection
and integration parameters.
So this one is also
a very good question.
Now, I'm going to answer this in two ways. The first thing is, when we look at shoulders, there are a couple of things we could think about when we say shoulder peaks.
Because the classic question that I've had over the years, and every once in a while I still get it today, is: I've got a small peak riding on the tail of a big peak.
It may not necessarily
be a shoulder,
but it could be
close to a shoulder.
And what's the best way to
integrate the two peaks?
So in this slide, in the upper
left-hand corner of the slide,
you see that we've
got two peaks.
They're not baseline separated.
There's a valley point
between the two of them.
And if I create a new
processing method, by default,
this is how Empower is
going to deal with it.
It's going to say, OK, there's
a valley point between the two.
So it puts a little
diamond shaped-mark
between those peaks.
There are different options
in how we could deal
with integrating these peaks.
And if you look at the upper
right quadrant of the slide,
there we're using the
valley-to-valley timed event.
So the valley-to-valley timed
event would draw the peak start
and end to the valley point.
Or, I should say, for the first peak, it's going to draw the peak end to the valley point. And for the second peak, it's going to put the peak start at the valley point.
Now, of course,
there, you're going
to have smaller areas
under those two peaks.
The lower left-hand
corner, you see
I've used a tangential skim.
So I've tangentially skimmed the
small peak off of the big peak.
And there, when you use
that particular event,
you have a start
time and a stop time.
And you have something
called a value.
And so what Empower does in
order for that tangential skim
to work is it looks at
the heights of the peaks
at the valley point.
The default value is four.
So that means, if that main peak's height is at least four times the height of the rider at the valley, it will tangentially skim the small peak off of the big peak.
You may have to
adjust that value.
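Just to make the criterion concrete, here is a tiny hypothetical sketch of the comparison described above (the function name and the numbers are made up for illustration; Empower handles this internally in the processing method):

def should_skim(main_height, rider_height, value=4.0):
    """Apply the tangential skim only when the main peak's height is at
    least `value` times the rider peak's height, per the default of 4."""
    return main_height >= value * rider_height

print(should_skim(100.0, 20.0))   # 100 >= 4 * 20 -> True, skim the rider
print(should_skim(100.0, 40.0))   # 100 <  4 * 40 -> False, no skim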
So that's a tangential skim.
And I know you
can't see it here.
But if you look at the peak table and you look at the int type, again, as we saw in the previous couple of slides, for a tangential skim, you would see a T for Tangential.
So if it's baseline,
you might see baseline
to tangent or
tangent to tangent.
It depends on the cluster
of peaks you're looking at.
In the lower right-hand
corner of the slide
using the ApexTrack algorithm,
we have the Gaussian skim.
And so this is not quite exactly
the same as exponential skim
in the traditional algorithm,
but it's pretty close.
And so you can see what
Empower tries to do
is it tries to skim off the
rider as if the main peak was
a nice Gaussian
symmetrical peak.
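Here is a loose illustration of that idea in Python, with invented data (this is definitely not Empower's Gaussian skim code, just the concept): fit a symmetrical Gaussian to the clean side of the main peak, extrapolate it under the rider, and treat whatever sits above that model as the rider peak.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, height, center, sigma):
    return height * np.exp(-((t - center) ** 2) / (2 * sigma ** 2))

t = np.linspace(0, 10, 2000)                       # minutes
main  = gaussian(t, 100.0, 4.0, 0.25)              # big main peak
rider = gaussian(t, 8.0, 4.6, 0.10)                # small rider on its tail
trace = main + rider

# Fit the Gaussian model on the front half of the main peak, where the rider
# does not interfere, then extrapolate it under the rider.
front = t < 4.1
params, _ = curve_fit(gaussian, t[front], trace[front], p0=[90.0, 4.0, 0.2])
model = gaussian(t, *params)

rider_region = (t > 4.3) & (t < 5.0)
rider_area = np.trapz((trace - model)[rider_region], t[rider_region])
print(f"estimated rider area above the Gaussian model: {rider_area:.2f}")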
So these are some options
that you have in terms
of integrating peaks like this.
And of course, the
question that I get
is, well, what's the
right way to do it?
And there's no answer to that.
You need to decide
which technique is best,
depending on your application.
And then you have
to stick with it.
And just as an
aside because we're
talking about detection and
integration, one of the things
that I always stress is, if you
have an SOP in your laboratory,
look at the SOP
because the SOP should
have a picture of the
chromatogram with the way
the SOP says you should
integrate your peaks.
So before you get too deep into
it, if it's an existing method,
check the SOP and check how
the integration is set up
in the SOP.
Obviously, if it's
a new method, you're
just developing it,
that's a different story.
So in conclusion
here, you can see,
depending on which one I pick,
the areas for these two peaks
are definitely going to vary.
And then when we
get into Empower,
I'm going to show you
there is a detect shoulder
event that you can use.
And this would be for cases, and it's only for the ApexTrack algorithm, where Empower can't detect the valley point. So it's going to pick it up as a shoulder.
OK?
Alright, great.
So now let's see if we
can get over to Empower.
So I'm going to attempt
to share my screen.
So we've got some
questions coming in.
And so here we are with
the shoulder peaks.
That was the great question
that we had before.
So here's a good case where we've got some fairly good-sized peaks.
But once you get
into the baseline,
you can see there are lots of
other little peaks in there,
right?
So if I go ahead and I
set up some parameters,
and this is actually a
good time to look at that,
I'm going to say let's go
from, I don't know, about here,
about 0.7 to about 9-1/2.
So let's put that in.
This is usually how I set
up the ApexTrack algorithm.
Integrate that.
So I've got lots of
little peaks in here.
And if we look at
the int type, that's
what we were talking about
earlier, all of those letters
that tell you how the
peaks have been integrated.
And if I turn on detect
shoulders and click Integrate,
now you'll notice that some
of the int types have changed.
So if I zoom in on
this area, for example,
you see here there's
a little peak.
Here's a peak right here.
Big peak and two
peaks on its tail.
This peak here now has
valley to shoulder.
Without that event, which is
the detect shoulders event,
this looks like one big peak.
Let me repeat that
so you can see it.
Bear with me for a minute.
OK, so big peak,
another peak riding
on the tail of the big peak.
And you can see there's
another peak down here.
But Empower doesn't see that.
That's a shoulder.
That's a perfect
example of a shoulder.
So I'm going to right click,
Add Integration Event, Detect
Shoulders.
If I tick this box on,
it's going to show me
where the event starts.
I can't put a stop
time in for this event.
I'm just going to leave it on
for the whole chromatogram.
And I click Integrate.
So now it detects
that as a shoulder.
And you can see this main
peak is baseline to valley.
This peak is now
valley to shoulder.
That's the shoulder.
And this peak is now
shoulder to baseline.
You may also see, in some cases where some of these are really small peaks, an R, which stands for Round. Like there's one there, baseline to round.
Some of these are
very, very small peaks.
So when you turn on
that detect shoulders,
you'll see an S for shoulders.
And you may see an R for
Round, depending on what
the peak shape looks like.
Alright?
So that's a very small peak.
And the baseline is doing all
kinds of odd things there.
OK?
Can you go over the
ICH impurity portion?
Unfortunately, that would
take too long because that's
a rather involved piece.
And maybe that could be the
subject for our next Ask Me
Anything webinar.
So here in the processing
method on the Impurity tab,
this is the ICH impurity
portion just briefly.
And then I can
bring up an example
where it's already set up.
You would choose your
impurity response,
whatever that impurity
response might be.
Maybe it's the percent area
against the main component,
the API.
You can select your
main component here.
You can set your
reporting identification
and qualification
thresholds here.
Anything that exceeds
that reporting threshold
will be flagged as an impurity.
And you should report it.
If something exceeds the
identification threshold,
you need to take
steps to identify it.
And the qualification
threshold, if you exceed that,
that means this is a
potentially hazardous impurity.
And you need to do
additional testing
to check the biological
activity of that impurity.
And you can also have it calculate total impurities and set your maximum allowed value for the total, as well as have it identify the single maximum impurity.
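To show the bookkeeping only (the threshold values and peak areas below are invented, and in practice the processing method does all of this for you), here is a hypothetical Python sketch of classifying impurities by percent area against the API and totaling them:

peak_areas = {"API": 1_000_000, "imp A": 600, "imp B": 1_100, "imp C": 2_000}

reporting_threshold      = 0.05   # percent
identification_threshold = 0.10   # percent
qualification_threshold  = 0.15   # percent

api_area = peak_areas["API"]
total_impurity_pct = 0.0
for name, area in peak_areas.items():
    if name == "API":
        continue
    pct = 100.0 * area / api_area          # impurity response vs. main component
    total_impurity_pct += pct
    if pct >= qualification_threshold:
        status = "exceeds qualification threshold (additional testing)"
    elif pct >= identification_threshold:
        status = "exceeds identification threshold (identify it)"
    elif pct >= reporting_threshold:
        status = "exceeds reporting threshold (report it)"
    else:
        status = "below reporting threshold"
    print(f"{name}: {pct:.3f}% -> {status}")

print(f"total impurities: {total_impurity_pct:.3f}%")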
You have the ability
to do impurity groups.
So to do impurity groups,
in the Component table,
you choose a component type.
So if I have a
bunch of impurities
that are low level impurities,
when I go over to the Impurity
tab, then what I can do is I
can group those together based
on that component type.
Alright?
So that would allow me to give
the sum total for that group.
On the right-hand side, you
have specified impurities.
So if you didn't want
to use these values here
for the thresholds because
these thresholds would apply
to all the peaks, let's say
I have some toxic impurity,
potentially really
toxic impurity,
I can set specified
impurities to have
specified thresholds that would
apply to only those peaks.
So this gives me an
idea for another webinar
that we could do together.
Now, that being said,
I do have a result
set that I can show you, where
it's already been worked out.
It just takes some time to set it up.
And here, you can see
this was already set up.
And if I go to the
processing method
and I go over to
the Component tab,
you see we've got
some component types.
And if I go to the Impurity tab,
I have my thresholds set up,
maximum allowed total,
maximum impurity.
And I've got one
specified impurity
that I'm concerned about.
This list over here,
this is if you're going
to do an adjusted total area.
You can exclude certain
component types,
which is actually a good
thing because a mobile phase
peak, a diluent peak, a system
peak, those are not impurities.
And we would not want to include
them in our calculations.
And I did a webcast a while back where we were talking about impurities and talking about some of the concerns on the part of the regulatory agencies that people are hiding, or potentially hiding, peaks when we, in fact, know that they're mobile phase peaks or system peaks.
And typically, what
do we do in that case?
We use inhibit integration.
Oh, I don't want to integrate my system peak or my mobile phase peak. So I'll inhibit integration in that region.
Well, this is where
some suspicion
comes in on the part of
the regulatory agencies.
So if you do it this way,
if you identify those peaks
with these component types
and then exclude them
from the calculations, then you
can justify what you have done.
OK, so that's a
very good question.
And now let me get to some
other questions that I had.
Hang on just one moment.
There's another
one that came in.
Will I get the same areas
and heights for my peaks
when I use traditional
versus ApexTrack?
OK, great question.
And let's clean
things up here a bit.
And let's go to a
different project.
Open this project.
Now I'm doing pretty well
with time, so that's good.
And if I take a sample into Review, let's just make that look a little bit better and let's get rid of that gray background.
OK, so here's an example, if I
zoom in on the baseline here.
It's not too bad compared to
other data that I have seen.
Got a bunch of peaks.
Most of them aren't
too bad, couple
of fused peaks here and there.
If I go in, and I should
have a method created here.
Let's see.
Here's the ApexTrack.
Well, we'll do it in reverse.
Let's do the traditional.
Integrate.
Alright, here we are.
So we have our
areas and heights.
Now I'm going to show
you one thing here
because I think it's important.
One of the things that I get asked a lot is: now I'm trying to set up my method and I'm struggling.
Whether you're dealing
with traditional
or you're dealing
with ApexTrack,
the first thing you should
do is set the peak width
and threshold.
And I think a lot of
people overlook that.
And so the default values for peak width and threshold are 30 and 50.
And depending on
the chromatography,
that may not work.
So with the
traditional algorithm,
how do you set the peak
width and threshold?
Well, the first thing
you do is you identify
the narrowest peak of interest.
Let's just say it's this one
here at about 5.15 minutes.
Literally draw a baseline
in under the peak
and click the peak width tool.
And so it sets the peak
width, in this case, to 7-1/2.
How do I set the threshold?
Well, what you
should do there is
zoom in on a portion of
representative baseline.
You don't want to set it on an
area where you've got peaks.
Otherwise, you're going to
get a very high threshold.
Then your peak
detection won't work.
OK?
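For anyone who wants to see those two manual steps spelled out, here is a rough Python sketch with made-up data (this is not the Empower peak width tool itself): take the width near the base of the narrowest peak of interest, and take the peak-to-peak noise of a representative, peak-free stretch of baseline as a starting threshold.

import numpy as np
from scipy.signal import find_peaks, peak_widths

t = np.linspace(0, 10, 6000)                                        # minutes
rng = np.random.default_rng(1)
signal = (50 * np.exp(-((t - 5.15) ** 2) / (2 * 0.05 ** 2))         # narrowest peak of interest
          + rng.normal(0, 0.2, t.size))                             # baseline noise

# Peak width: full width near the base of the narrowest peak of interest.
apexes, _ = find_peaks(signal, height=25)
main = apexes[np.argmax(signal[apexes])]
width_points = peak_widths(signal, [main], rel_height=0.95)[0][0]
print(f"peak width near the base: {width_points * (t[1] - t[0]) * 60:.1f} s")

# Threshold: peak-to-peak noise over a representative, peak-free baseline region.
baseline = signal[(t > 7.0) & (t < 9.0)]
print(f"starting threshold from baseline noise: {baseline.max() - baseline.min():.2f}")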
So once I've got that done, I'm also going to add in, because I don't want to look at stuff outside of this region, an inhibit integration event. Let's put it in here until, oh, I don't know, about 4.8. How about that?
And then what we'll do is
we'll put another inhibit
on the back end of
this chromatogram,
so that we can just look
at peaks of interest.
And now I'll click Integrate.
OK?
So the traditional
algorithm is easy to work
with provided that you set the
peak width and the threshold
upfront.
Then you can add in your
integration events as needed.
And if you say to me,
well, the run is too long
and the peak width
and the threshold
don't work properly
throughout the run,
well, then you can always
right click, Add an Event.
And you've got all kinds of
events that you can pick from.
So you have a set peak width.
You have set liftoff.
You have set touchdown.
You have set minimum
area, set minimum height.
So you can always fine tune
it by using these events.
Just note this: any event that begins with the word Set has a start time but doesn't have a stop time; the stop time is grayed out. Which means, here, I've got a global peak width of 7.5.
If I set my peak width at 6-1/2
minutes into the chromatogram
and I set it to
some other value,
that new value is now
in effect for the rest
of the chromatogram.
So I might have to
put in a second event,
just reset it or set it
to a different peak width,
depending on what my
peaks actually look like.
OK?
So it's as easy as that.
If I'm going to do an ApexTrack
method, let's do that.
How do I set that up?
Alright, I'm going
to do ApexTrack.
With ApexTrack, I
determine over what portion
of the chromatogram I'd
like to have peak detection.
So 4.8 to 13.4.
And then what I do is
I let Empower tell me
when I click Integrate
what's the optimum peak width
and detection threshold
for the chromatogram.
So it looks at the second
derivative of the chromatogram.
And it sets the peak
width based on the tallest
peak in the second derivative.
And it also sets the
detection threshold
based on the peak-to-peak
noise in the chromatogram.
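Along the same lines as the earlier second-derivative sketch, here is a loose, hypothetical illustration of those two ideas (again, not how Empower actually computes them): derive a starting peak width from the extent of the strongest feature in the second derivative, and a detection threshold from the peak-to-peak noise of a quiet region of the chromatogram.

import numpy as np

t = np.linspace(0, 15, 3000)                                        # minutes
signal = 30 * np.exp(-((t - 6.0) ** 2) / (2 * 0.08 ** 2))           # one clean peak

# Peak width estimate: the negative lobe of the second derivative sits under
# the top of the tallest peak, so its extent gives a width scale.
d2 = np.gradient(np.gradient(signal, t), t)
lobe = np.flatnonzero(d2 < 0.5 * d2.min())
print(f"suggested peak width: {(t[lobe[-1]] - t[lobe[0]]) * 60:.1f} s")

# Detection threshold estimate: peak-to-peak noise over a peak-free region
# (noise is added here only so there is something to measure).
noise = np.random.default_rng(2).normal(0, 0.05, t.size)
quiet = (signal + noise)[(t > 10) & (t < 14)]
print(f"suggested detection threshold: {quiet.max() - quiet.min():.3f}")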
So again, setting these up
front, not difficult to do.
And if I need to optimize
it, I can optimize it
somewhere during the
chromatogram with the events.
Now let's get back to the person's question: what if I wanted to make a comparison of using traditional and ApexTrack?
And while I'm opening
up the report publisher,
I'll give you the answer now.
You will see differences in
the areas and the heights
because they are two
different algorithms.
So there is no surprise, OK?
There's no surprise.
So what I'm going to
do is I'm not going
to make a fancy report here.
I'm just going to show
you very quickly --
where's my processing method?
Here it is.
OK.
Now let's try to zoom in, so you can see the first peak.
You can see the areas are
a little bit different.
The heights are a
little bit different
because, again, they're
two different algorithms.
And so it should
not be a surprise
that there are differences.
Now, some of these, you
see a little bit more
of a difference than
others, depending
on what the peaks look like.
Some of the peaks in the
chromatogram are small,
and they're fused and
so forth and so on.
So yes, because of the fact that
the algorithms are a little bit
different, you're going
to see differences
in the areas and heights.
Is this a point for concern?
No.
Because if I run my standards
and I generate a calibration
curve and I run my samples and
I perform quantitative analysis,
it should not affect my
quantitative results.
Here's another
very good question.
Where can I get more
detailed information
on integration events?
So, great question.
And the answer there
is go to the help.
The help has a lot
of good information
about integration events.
Now, I bookmarked
the tangential skim.
Now let's just open
this window up,
so we can see the information.
For each of the
different events that you
find in your processing
method, whether you're
using the traditional algorithm
or you are using the ApexTrack
algorithm, it doesn't
matter, type in the event,
and you'll get
something like this.
You'll get an explanation
with a picture.
And the reason I'm
showing a tangential skim
is we talked about it before.
Tangential event has a
start time, has a stop time.
And it has a value.
And that value, if we get down to the nice picture here in the help, relates to the heights of the two peaks at the valley point. It says, if the height of the main peak is greater than or equal to the height of the rider peak times the value, then it will employ the tangential skim. Four is the default value, and you can change that.
There's different
types of skims.
This is called a classic rear
skim, as you saw in my slide
earlier.
But what if I wanted to
skim the big peak off
of the little peak?
Alright?
So what we would do is we
would change the value.
You see, for the
classic skim, the value
is greater than or equal to 1.
For the non-classic
rear skim, it's
going to be somewhere
between 0 and 1.
If I want to skim
the small peak off
of the front of the back peak,
I can put in a negative value.
So this is why I say the
online help is terrific
for these different things.
If you have any
event, search for it.
Here, let's take a
different one, OK?
Allow Negative Peaks as the topic, alright, here it is. Display it.
Lots of stuff to read.
But of course, what's really
helpful are the pictures.
Because you can see, in
this particular case,
maybe I've got a
refractive index detector.
I've got positive and negative
peaks in one chromatogram.
OK, so if I turn on the
Allow Negative Peaks,
that would allow me to
integrate the negative peaks.
Alright, so please go
to the online help.
And there's lots
of good information
on these timed events.
OK, here's another
very good question.
Sometimes I have to do manual
integration on peaks and sample
chromatograms.
How do I get Empower to keep the manual integration when I reprocess? If I reprocess, Empower uses the original integration parameters.
So great question.
And I'm going to first
give you an explanation.
And I can show you here.
We have some time left.
Alright?
So if I have my sample
set and I process
that sample set and I get
a result set, and let's
just take this result set here.
That's a blank.
So it doesn't look very nice.
But there is a standard.
So, OK, we have peaks.
Great.
So let's say I come to a sample.
And in a particular
sample, I'm not
happy about some integration.
Now, these are actually
pretty good peaks.
That's obviously a
subjective statement.
But let's say I'm not
happy with something
in the integration in one
of these chromatograms.
And then I decide,
OK, I'm going to go in
and I'm going to manually
reintegrate the peak.
OK?
Let's say I don't
like that peak.
OK, so I just did a manual
integration on that peak.
Remember, the int type reflects, with the lowercase letter, the fact that I've manually integrated that peak.
So what you have
to do in this case
is go to File, Save, Result. OK?
Then exit out, right click on the result set, Process, and, in this case, I'm going to use the processing method. I think it's this one.
And you want to click
Use Existing Integration.
And we're not going
to recalibrate.
We're just going to
requantitate those samples.
You could calibrate
and quantitate.
And now what it's going
to do is it's going
to use that manual integration.
It's processing the sample set.
If you go out and you reprocess,
go back to the sample set,
and you say reprocess,
guess what it's going to do?
It's going to use the
original processing method.
And you won't get the result
with the manually integrated
peak.
OK?
So now if I go to Review,
there's today's --
I'm sure this is --
wait here for this to open up.
I don't remember
which one I did it on.
But look.
I have a feeling I
used the wrong method.
So I apologize there.
This is 3D data.
That's probably
why that happened.
Alright, so what you
have to do, again,
is, once I've reviewed on a
2D set, let's just use that.
I'm running down on time
here, so I got to watch out.
OK, so again, if I do something
like this, for whatever reason,
I'm not happy, right?
I save that result. Close it.
And let's just see
what processing method
we used here in this case.
Oh, it's called inform PM.
Find.
So I right click, Process,
Use Existing Integration,
Quantitate Only.
It's going to go
off, and it's going
to use that manual integration.
Alright?
So if I look, there it is.
Now it's used the
manual integration.
So the key is make
sure that once you
do the manual integration,
go up Save File, Save Result.
Back out.
Right click on the
result set, Process,
Use Existing Integration.
And that'll use any of the manual integration that you may have applied when you were first working with this particular result.
And I think I realize now what may have happened. If I can't find it, so be it.
I don't remember
which one I did here.
Ah, there it is.
It did work.
Alright?
Good.
So we're getting down on time here, and I've got to watch it.
Any questions that
we didn't get to,
I will make sure that I
answer in an email to you.
So sorry for the delay again. And sorry for running out of time here before getting to all the questions.
Remember to check out my blog
at Waters.com/Empowertips.
You can subscribe.
The advantage to
the subscription
is you get an email every
week with the entire tip.
So you don't need to
go back to this URL.
And the Waters
Knowledge Base, if you
haven't looked at the
Waters Knowledge Base,
I encourage you to do so
at support.waters.com.
Because the nice
thing there is we've
got somebody that's
moving all of the tips
into that knowledge base.
And you can actually
search on a topic.
So, for example, I did a whole series on dealing with QDa data with Empower. You can search on that.
And then you can drill down
into the Informatics area
and into my Empower Tips.
And you'll find
all the tips that
are related to working
with QDa data and Empower.
Somebody just emailed me
about that the other day,
and so it's a good
point to make.
Check out that
knowledge base, alright?
So we're going to accumulate
any of the questions
that we didn't get to.
And I know there were
several we did not get to.
I apologize.
And I think we're just
about out of time.
So thank you very much
for your attention.
