Hello
My name is Thales Sehn Körting and I will
present very briefly how the kNN algorithm
works
kNN means k nearest neighbors
It's a very simple algorithm, and given N
training vectors, suppose we have all these
'a' and 'o' letters as training vectors in
this bidimensional feature space, the kNN
algorithm identifies the k nearest neighbors
of 'c'
'c' is another feature vector whose class
we want to estimate
In this case it identifies the nearest neighbors
regardless of labels
So, suppose in this example we have k equal to
3, and we have the classes 'a' and 'o'
And the aim of the algorithm is to find the
class for 'c'
If k is 3 we have to find the 3 nearest neighbors
of 'c'
So, we can see that in this case the 3 nearest
neighbors of 'c' are these 3 elements here
We have 1 nearest neighbor of class 'a', and we
have 2 elements of class 'o' which are
near to 'c'
We have 2 votes for 'o' and 1 vote for 'a'
In this case, the class of the element 'c'
is going to be 'o'
This is, very simply, how the k nearest
neighbors algorithm works
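To make the voting procedure concrete, here is a minimal Python sketch of this majority-vote rule; it is not from the original presentation, the training points, labels and the query point 'c' are made-up values, and the Euclidean distance is assumed as the distance measure

```python
from collections import Counter
import math

def knn_classify(train_points, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Distance from the query to every training point (Euclidean, as an assumption)
    distances = [
        (math.dist(query, point), label)
        for point, label in zip(train_points, train_labels)
    ]
    # Keep only the k closest training points
    distances.sort(key=lambda pair: pair[0])
    k_nearest_labels = [label for _, label in distances[:k]]
    # The class with the most votes among the k neighbors wins
    return Counter(k_nearest_labels).most_common(1)[0][0]

# Made-up bidimensional feature vectors for classes 'a' and 'o', and a query 'c'
train_points = [(1.0, 1.0), (1.5, 2.0), (3.5, 3.0),   # class 'a'
                (5.0, 5.0), (5.5, 4.5), (6.0, 5.5)]   # class 'o'
train_labels = ['a', 'a', 'a', 'o', 'o', 'o']
c = (4.0, 3.8)

# The 3 nearest neighbors of 'c' are one 'a' and two 'o', so 'c' is classified as 'o'
print(knn_classify(train_points, train_labels, c, k=3))
```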
Now, there is a special case of the kNN algorithm,
which is when k is equal to 1
So, we must find the single nearest neighbor
of the element, and that neighbor will define the class
And to represent this, each
training vector will define a region in this
feature space here
And a property that we have is that each region
is defined by this equation: the distance between
each element x and x_i has to be smaller than the
same distance for every other element
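As a reference, the region described here can be written (assuming d denotes the distance in the feature space and x_i is the i-th training vector) as

$$ R_i = \{\, x \;:\; d(x, x_i) \le d(x, x_j), \;\; \forall\, j \ne i \,\} $$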
In this case it will define a Voronoi partition
of the space; for example, this element 'c' and
these elements 'b', 'e' and 'a' will each define
these very specific regions
This is a property of the kNN algorithm when
k is equal to 1
We define regions 1, 2, 3 and 4, based on
the nearest neighbor rule
Each element that is inside this area will
be classified as 'a', and each element
inside this area will be classified as 'c'
And the same for region 2 and region 3,
for classes 'e' and 'b' as well
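As a small continuation of the hypothetical sketch above, this nearest neighbor rule is just the k equal to 1 case

```python
# With k = 1, the query simply takes the class of its single nearest neighbor;
# with the made-up points above, the nearest neighbor of 'c' is of class 'a'
print(knn_classify(train_points, train_labels, c, k=1))
```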
Now I have just some remarks about the kNN
We have to choose an odd value of k if we
have a 2-class problem
This happens because when we have 2 classes
and we set k equal to 2, for example, we
can have a tie
What will be the class?
The majority class inside the nearest neighbors?
So, we always have to set odd values of k for a
2-class problem
And also, the value of k must not be a multiple
of the number of classes; this is also to avoid
ties
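As a small illustration of such a tie (again just a sketch, not from the presentation), with 2 classes and k equal to 2 the two neighbors can belong to different classes and no majority exists

```python
from collections import Counter

# Hypothetical k = 2 vote in a 2-class problem: one neighbor of each class
votes = Counter(['a', 'o'])
print(votes.most_common())  # [('a', 1), ('o', 1)] -> a tie, no clear majority
```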
And we have to remember that the main drawback
of this algorithm is the complexity in searching
the nearest neighbors for each sample
The complexity is a problem because, in the case
of a big dataset, we will have lots of elements
And we will have to compute the distance between
each element and the element that we want to
classify
So, for a large dataset, this can be a problem
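To see the scale of the problem, a brute-force search such as the sketch above computes one distance per training element for every element we want to classify; a rough illustration with made-up sizes

```python
n_training = 1_000_000  # elements in a big training dataset (made-up number)
n_queries = 10_000      # elements we want to classify (made-up number)

# Brute-force kNN computes the distance from every query to every training element
distance_computations = n_training * n_queries
print(distance_computations)  # 10,000,000,000 distance computations
```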
Anyhow, this kNN algorithm produces good results
So, this is the reference I have used to prepare
this presentation
Thanks for your attention, and this is very
briefly how the kNN algorithm works
