 
So now we move into the science part
of who we are. It's fundamental.
We are scientists.
It's not a bunch of touchy-feely BS. Pop
psychology sometimes gives us a bad name.
People misunderstanding what we do
doesn't help at all,
and we're not all that good at promoting
ourselves publicly. So a lot of people
don't know what we do or who we are.
And there are so many kinds of psychologists, though many people think there's only one
or two kinds,
but at a fundamental level all people
who are psychologists
are scientists. We follow the scientific
method
because it helps us arrive at a better
understanding of our patients if we're
clinical
or our participants if we're
research-oriented,
right? We want to understand the human
experience and if we want to do that in any kind of
way that's beneficial
generating knowledge that people can hang their hat on, we've got to do it in a
scientific manner.
So, goals of science, measure, describe,
understand, predict, and apply.
There's not always an obvious
application when you're doing some kinds of
research.
Some people are just interested in
fundamental principles. They want to know
why things are and they do studies, maybe
case studies,
maybe correlational studies, maybe experimental studies; there are lots of kinds of
studies,
but they do studies just to find out why
something is,
or how it works, and they don't do it with some kind of long-term
idea in mind that it will be useful to
some group of people but oftentimes it is.
So we do the science because we have
the questions and we engage in empirical
work. Maybe I should make that one
yellow again. Empiricism is a word that apparently some people don't really know
well. Empiricism is an attitude and an approach
to understanding subject matters by
holding yourself as objective as
possible.
That's a tough thing to do. It's a tall
order to be objective, to be empirical,
because we all have biases, every
one of us, self included,
that lead us to believe certain things, and
because we believe certain things we make
conclusions without ever checking them,
and sometimes you'll check them but you'll kind of interpret it
the way you wanted to see it to begin with. To be truly empirical
is a tall order because you're going to have
to hold yourself somewhat neutral.
You can't be totally neutral. You've got a
hypothesis in mind
or you wouldn't be doing the study, whatever the study is,
whether it's in physics or chemistry or
biology
or psychology. You're doing a study because you think something's going to happen
a certain way and you need to test it
out. So you
make a tentative statement. You could call
it an educated guess, and many people do,
but I would say that's not a good word for it. It
translates into a research question
and a possible answer based on theory
and previous research.
So it's not an educated guess like well I'm a pretty smart guy, I think this is going to
happen.
You're basing it in theory and previous
research which
gives you information about what's already
known and what
isn't known, right? It's not just me
waxing
philosophical because I have some kind of idea how something's going to go.
I want to get into the literature. I want to see
what studies have been done,
what studies haven't been done, what
conclusions have been drawn, which ones are
really tentative and shaky,
which ones seem to be firm and then
that's going to drive my
theory that I hold personally that I would
test
through hypotheses that I would enact
through research
of various kinds. Now when I research
something
I need to be really clear about what it is. I've got variables that I'm studying
but people use language in all kinds of ways and I can't just use it
willy-nilly. I've got to be very specific
about what I'm talking about.
I need an operational definition. Operational definition being one that describes the
actions or operations by which
I'll measure and/or control the variable. The
researcher creates that.
The researcher is the person who designs
the study
but I can't define it one way, another
researcher define it another way, another
researcher define it yet another way,
and all of us still see
each other's results as equally valid,
because we're kind of talking about
different things each time.
So an operational definition gives a clear
set of circumstances,
specific criteria that others will know
exactly what we mean.
So if I say aggression, do you all know what
aggression is?
Everybody knows what aggression is. Now how do you operationally define aggression?
Ah!
Well, you could get into some philosophical
notions.
If I am walking down the hall and I'm looking over here and I bump into somebody else
and knock them down, is that aggressive?
Probably people will say no, no, you didn't intend to do that, that was an accident. That was a
mistake,
right? So we call - what - intent to harm
is a possible
part of our definition, right? Did you intend
to harm that person? If you intended to
harm the person,
we would say well that would be aggressive wouldn't it? You were trying to do them harm; might be
physical harm,
might be psychological harm. You ever flip somebody off and you're just kidding?
Right? So you can't just say well flipping somebody off is aggressive because sometimes people are
kidding.
They're not intending harm. So you can't
just say well it's this behavior or that
behavior.
You have to operationally define it. So let's look at say aggression
in pre-schoolers. I want to study how
aggression emerges in young children for
example.
I might start talking about ways that I
could measure aggression in
pre-schoolers.
Now typically I can't ask them what they
intended to do.
What were you thinking? They'll go, I don't know. I don't know.
I don't ever know. Is it good or is it bad? If it's bad I definitely don't know
and if it's good I might know, right? What are you trying to get at?
Toddlers aren't dumb. They're smart, right?
Pre-schoolers.
So you start - say - well maybe pushing
behavior.
Now if I'm pushing somebody on a swing that's a different kind of a push,
but I say well in the classroom
during non-recreational time, if I see a
child push another child
I might catalog that as an aggressive act.
One instance thereof. But I can measure
what?
Frequency - how often it occurs. I can measure how long it occurs,
right, duration. And I can measure intensity. So
a shove like that might not be a big deal
but a shove where I knock you out
is of a higher intensity. If I shoved you all
the time,
right, that's a higher frequency. If I shove
you till the teacher comes and pulls me off of you
then that's a long duration. So now we're
getting very specific about
our variables. Why? So that we can agree
to some degree
on what it is we're actually measuring.
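The frequency, duration, and intensity measures described above can be sketched in a few lines of code. This is a hypothetical illustration, not any real coding scheme: the event data, ratings, and names are all made up.

```python
# A minimal sketch of coding observations under a hypothetical operational
# definition of aggression in preschoolers: each recorded push is a
# (duration_seconds, intensity_rating) pair. All data are illustrative.
from statistics import mean

# One hour of (made-up) classroom observation for one child:
# each event is (duration in seconds, coder's intensity rating 1-5).
push_events = [(1.0, 2), (0.5, 1), (4.0, 4)]

frequency = len(push_events)                     # how often it occurred
total_duration = sum(d for d, _ in push_events)  # how long, overall
mean_intensity = mean(i for _, i in push_events)

print(frequency, total_duration, mean_intensity)
```

With the same operational definition in hand, any observer should produce comparable numbers from the same tape.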
Because if we can't agree on it and we
won't always agree on it
then we have nothing, right? So if I'm using the behavioral perspective then
I've got some defined behaviors that I
would say account for aggression
but one person might see that act and go, well that wasn't really a push and another person
goes well it really was a push.
So you have inter-rater reliability
issues
but if you have it well-defined, then most
of the behaviors that occur
will be coded the same way by different
people
ensuring some kind of reliability of the
measure.
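One common way to quantify inter-rater reliability is Cohen's kappa, which corrects raw agreement for the agreement two coders would reach by chance alone. A minimal from-scratch sketch, with made-up labels from two hypothetical coders:

```python
# Cohen's kappa computed by hand for two coders' event labels.
# The labels below are invented data for illustration.
from collections import Counter

coder_a = ["push", "push", "none", "push", "none", "none", "push", "none"]
coder_b = ["push", "none", "none", "push", "none", "none", "push", "none"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # raw agreement

# Chance agreement: probability both coders pick the same label at random,
# given each coder's own label frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2

kappa = (observed - expected) / (1 - expected)
print(observed, kappa)  # 0.875 raw agreement, kappa 0.75
```

A kappa near 1 means the coders agree far beyond chance; a kappa near 0 means the operational definition isn't pinning the behavior down well enough.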
So, operational definitions whether I'm
talking about
behaviors or whether I'm talking about
personality variables,
do you have self-esteem, do you know what self-esteem is? Everybody goes well I know what
self-esteem is.
Well how will you measure it? How do I know that you're even measuring self-esteem
if you say you're doing a study on it?
That's going to require an operational
definition that's going to have to be
convincing to me
as another researcher.
And I look at all of this in the context of
theory, and theory is very important, but
when you say
theory you haven't proven anything.
When we prove stuff we call it a law, right?
Is it the theory of gravity
or the law of gravity if I drop this? What do I say?  I say I drop
this. If I just let go of this, what direction
will it go?
I wonder? No you don't! That's the law!
Right? Not the theory of gravity. Relativity
is a theory but the practical function of gravity on Earth is not in
question.
Nobody wonders if it's going to go up if
I let go of it, right?
But theories of human behavior are
never quite that certain.
In fact most theories are never quite
that certain. It's not called the Law of
Evolution, it's called the theory of evolution,
right, meaning there are still some
questions. It's open to interpretation.
You could put forward maybe some other
ideas
that could explain the data equally or
better than the previous ideas.
That's the whole notion of theory to
begin with: I have a set of
inter-related ideas that explain some phenomenon
in a way that I think is clear and
convincing.
Well how do I know it's clear and convincing? Cause I should be testing it,
and if I'm being a truly empirical
researcher, I should look for evidence to
falsify
my theory cause I could set up my
studies to kind of
go the way I think they ought to go,
or I could set up studies to disprove my
theory
and those are going to be more convincing
because if I fail to disprove it then
that's more evidence in favor of it,
but it doesn't ever say that I have
proven it. When you see
those commercials that say clinically
proven
what they really mean to say is we've
done a study
and it went the way we wanted it to go, but
frequently the companies
are funding those studies and the scientists
have a vested interest
in the results going the way the company
wants them to go.
So proven is a word you shouldn't be using very much at all in
science
and certainly not out there in the general
public. But the general public doesn't
know any better.
Not because there's something wrong with them but because maybe they haven't been to high school
and taken science classes. Not everybody has,
not everybody was paying attention, not
everybody really grasped the implications
of the high school science classes, but
when you get out here to college and you
get challenged again and again and again
you start realizing - oh -
that's why you don't say it's proven
because maybe it was just some kind of
artifact of the type of people
you were taking data from, maybe it was the design of your study, maybe it was the way you
analyzed it with a particular set of
statistics, if I'd used other
statistics that didn't violate some
assumptions I might have got a different
conclusion.
And you start taking things with a grain of
salt. Don't become a cynic,
but be skeptical. When people are trying to tell you what to do with your life
they're almost always saying they know
for sure. Well how do you know for sure?
We want to look to science, hopefully, and we'll see whether these ideas have been
supported by the data
or whether they've been refuted by the
data, or whether different people find
different things and it's really
unresolved
and we don't know though we're looking. You'll find that a lot in
psychology. We don't have the answers to everything. I don't know why you dream.
I just know that you do dream and even
if you don't remember you dream
there are some types of measures we can
take that would indicate that you are
probably dreaming whether you remember
it or not.
We can say a lot of things about sleep
cycles but we can't say fundamentally
 why it is you dream, much less why you
dream a particular dream.
It's not that we're not looking, and
that's what scientists do,
but it's unresolved and people who tell you
that they know,
you've got to be careful. Scientific method I
assume you know about but you know it's
worth a refresher look.
You've got to formulate your hypothesis based on some theory.
It is not just some wild thing; you're not going to reinvent the wheel.
Right? You go to the body of literature
that's been examining this, for as long
as people have been examining
whatever it is you're interested in,
and you see what's been done and what hasn't been done and then you find a critical
question that may be unanswered or might need some
revisiting to see if it's still valid or if it
was ever valid. Replication,
we would call that, right? And then you
derive some kind of a hypothesis.
Well, if I've got a research question or a
hypothesis, I've got to have some way to answer
it.
So I'm going to design a study and there's
lots of different studies you could do.
I want to answer the question.
I've got to figure out a method to do that. In
other words I want to be systematic about
it
I want to be careful about it. I want to make sure that I am going to design a
study
and to describe it if it makes it
into the literature in a way that somebody
else can read it and replicate it. In other
words,
do it again, cause finding something once
may be pure chance and may be an artifact of your sample, may be
a flaw or a particular factor in your design
or your analysis procedure, right? Doing
it again with other teams in other
places and finding the same thing
lends strength to the findings, right? So, I want to design a study
that other people can understand what I
did and I want to then collect data
using that method, whatever study design
I have, there is some data to be collected.
I might get institutional data. I might
examine
the school records of kids in junior
high school,
maybe years after I watch them in
preschool,
and see whether or not they got referrals
to the office
based on ratings of what we called aggression
when they were in preschool. There's all kinds of data you can bring to bear on the questions
but you've got a method
and then you get your data and you analyze it, and you analyze it using methods
that will determine the significance of
the results.
Significance here meaning that it was not
due to chance.
If your results are statistically significant, that means they were very
likely
the product of an actual phenomenon that
you measured with some kind of reliability
and some kind of validity
because you might not be measuring it the
way you think you are. You may not be
measuring what you think you're measuring.
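One way to see what "not due to chance" means is a permutation test: shuffle the group labels many times and ask how often a difference as large as the observed one shows up by luck alone. The scores below are invented; this is a sketch of the logic, not any particular study's analysis.

```python
# A permutation test sketch: is the difference between two groups bigger
# than what random label-shuffling produces? All numbers are made up.
import random

group_a = [5, 7, 6, 8, 7, 9]   # e.g., scores under one condition
group_b = [3, 4, 2, 5, 3, 4]   # scores under another

observed_diff = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

rng = random.Random(0)          # seeded so the sketch is reproducible
pooled = group_a + group_b
trials = 10_000
extreme = 0
for _ in range(trials):
    rng.shuffle(pooled)
    diff = sum(pooled[:6]) / 6 - sum(pooled[6:]) / 6
    if abs(diff) >= abs(observed_diff):
        extreme += 1

p_value = extreme / trials
print(observed_diff, p_value)   # a small p (say, < .05) means chance
                                # alone rarely produces a gap this big
```

That p-value is exactly what "statistically significant" is shorthand for: the shuffled, chance-only world almost never reproduces the observed result.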
And you've got to be careful with stats. Some people don't know how to do stats.
Don't be afraid of stats and probability.
It's a type of mathematics. I struggle
with it. I worked on it really hard, really hard,
as an undergraduate and got a B as an undergrad. Hardest B I ever earned but I really
didn't have the fundamental underlying
knowledge necessary to
truly make sense of it, thus I was kind of
scared of it and I didn't engage it fully
but they've got all kinds of help out there that they didn't have when I was
an undergrad. You go to khanacademy.org,
right, old Sal Khan will teach you stats
in 10-minute video clips on YouTube and
then give you software to test
yourself.
The reason you want to know your stats is
not because you're necessarily going to
become
a scientist but because you want to become scientifically-minded as a citizen
because so many people tell you what you
need to know based on their data
and if you don't know what data are and you don't know how data are analyzed, then you
are at the mercy
of their conclusion without any ability to
check the validity of it,
right? So, we're going to go through some things and you should start asking, well what's your sample?
How did you get your sample? What's your study design? What was your analysis?
Things of this nature. We will look at
some stuff here shortly
that you'll go - oh - well I always thought mean meant that and it was very conclusive. Well means can be
skewed.
Modes can be bimodal or trimodal or multimodal, right?
You start thinking in terms of standard deviation, you get a better handle on it,
whether you become a scientist or not.
So, bearing in mind that statistics can
be massaged
and manipulated, and certainly studies can
be designed
to find what you want to find - if I say, have you stopped beating your children yet - yes
or no?
No, I haven't.
Oh, you're still beating your children. Oh, yes I have. You used to beat your children?
Oh no, neither one. You see what I'm saying. I can hamstring...
I could have hamstrung the person
answering the thing - hamstrung hamstring
hamstring-ed...
I don't know, I can't conjugate hamstring.
I can make people answer in ways I want
them to answer by simply rigging the way I
ask the question
or presenting the kinds of responses
that I'll allow in my study.
Look at political polls all the time. They talk about a margin of error but they
very rarely talk about how they
acquired their sample, what their
statistical
mode of analysis was, much less how they designed the questions and what the
answers were, right? So, it might mean just what they want it to mean,
it might mean nothing that you think it
means.
So I analyze the data, and I'm assuming
here that we have engaged in an
empirical process
where we're being genuine, as genuine as we can be as biased people:
we formulated a true hypothesis from the literature, designed a study that should be
able to answer it, with some limitations
probably in play,
and collected the data in the cleanest method possible with the
greatest respect for my participants.
Then I analyze it with the appropriate statistics
and draw some conclusions based on the
data. Now, if my hypothesis is not supported
it doesn't mean necessarily that my
theory is wrong.
The first thing I should wonder, if I
don't support my hypothesis, especially
if it's based firmly in the literature,
is: did I do something wrong, right?
If lots of people have found one thing and I
don't find that, it should make me wonder
did I get a biased sample somehow? Did I have a flaw in my study
design?
Is there some kind of problem with the
data that I have? Maybe I'm off by one
column in Excel cause I cut and pasted
just a little bit wrong
that made everything wrong, right?
Maybe I picked the wrong stat or maybe
I didn't understand the stat I needed
that would best fit the kind of data I had. Those are all possibilities.
But if I can go through all of these systematically and especially if I
replicate having eliminated those options
and I still don't support my hypothesis
I might start to think my theory is
wrong. Something is wrong with my theory.
Right? That is as important as supporting the theory.
They used to think as we will see when we get to observational learning that girls just
weren't aggressive.
They weren't physically aggressive. They were incapable of it.
Turns out when you design a study just right at a certain period in American
history
you can get them to be aggressive and
now they can be as aggressive as they
want to be.
They're still statistically speaking not as
physically aggressive as males but
they're aggressive,
socially aggressive too. Males are socially
aggressive,
right, excluding people, starting rumors,
talking smack behind their back.
Designed to hurt, right? Intimidate.
Bother somebody. Injure them
psychologically if not
physically. So you start looking at things and you go well the
data have never shown that girls were
aggressive. Well, depends on how you
designed your study.
So now you design it like Albert Bandura does and we'll talk about the Bobo
doll study.
You should read about it. And you find out, oh yeah, they have the capacity to become aggressive
but the society they find themselves in
really discourages that behavior
in female children such that they're
less likely to do so.
It's not that they're incapable or
genetically unable
to produce physically aggressive
behaviors.
So finding something that you didn't
expect
sometimes tells you a lot. But sometimes it tells you that your study is messed up.
And if necessary, do it again.
Then report the findings in a journal. Not on Bob's psych page.
Right? Not because you can just put it out there you did some study.
But because you have faith in your abilities as a scientist, you submit it to peer review.
You send it to a journal that has an editor and they have a reviewing
board and they send your sweet, awesome manuscript out to other scientists who have
similar training and they tear it up.
They're like, what is this over here, and what were you thinking about over there? I'm not sure you can use that kind of stat here, and I'm not even sure you can
draw that kind of conclusion from this kind of a research question. They tear it up,
not to tear you down but to build up the integrity of the work,
if you haven't thought about these things,
you should think about them. It may send you back to the drawing board to do it again, or it may be something you can fix in the manuscript: ah, they just want it made clear here,
you need to be much more explicit about this
right? and by the time it emerges in the scientific literature
then we, generally speaking, as citizens, have some faith 
that it has validity, that it is worth knowing, that it is worth hearing about,
but you've got to use caveats all the time and good scientists always use caveats.
The media don't use caveats.
They say, oh, studies have proved that
you should do this,
and studies have proved that you should do that and a new study came out today suggesting this
and if you go and actually read the studies it doesn't say we conclude that this is the
way it is,
it says within the context of this
study under these circumstances
we draw forth from these data this
possible idea that might be
generalizable to other people
in our society. That's how it will be worded there.
