What we're searching for today is very
simple: it's the answer to how we
decode the wonderful code that we
created just about a week ago now.
Something like that.
Let's just remind ourselves where we
were with this code. It was a 5-bit code.
Coding theorists will talk about this
code, which I'm going to write out, as being
a [5, 2, 3] code. I'll fill in the details and
then I'll relate what I've written down to that notation.
Remember also that these are the exact
powers of 2: the number four position is 2 squared;
the number two position is 2 to the power 1; the number
one position - well, that's 2 to the power 0.
But that leaves bit 3 and bit 5 for
the actual message bits. Two bits -
bit 3; bit 5. Two bits: four combinations
possible. And those are the four San
Francisco weather states. So I'll sometimes refer
to these as "info bits" or "message bits" - it
comes to the same thing. That's the
message you're trying to get across.
That's the message that these parity
bits are there to check out and make sure
is OK. And we ended up with one codeword -
if you remember, "codeword" is the "in" phrase
for these things.
The 00 state, in message terms, I think we
said was "foggy" this time around. And
here's the protection of the parity bits.
There we are, all written out again now. Coding
theorists would call this a [5, 2, 3] code.
How does that work? Well, it's a 5-bit
code. That's what the first number means.
It means total number of bits. The 2
means the number of message or
information bits. And this, if you
remember, is this business called the
"distance". How many bits differ between
these rows? So the "distance", in this
technical usage of the term, is the
number of bits that differ between that
line and that line - here, it's 3. And to get one
of these codes working you need a
minimum distance of 3. And what do we
mean by "working"? What we mean is that a
distance-3 code can correct a 1-bit error.
And for those of you just yelling at me:
"But what's the general formula, then,
for what you can correct, for a given
distance?" Watch carefully.
It's a one-liner, more or less: floor((d - 1) / 2),
where "floor of" means "round down". So let's do it for distance
three. 3 - 1? Two. 2 / 2? One. Round down 1?
It's already rounded down. So that's telling you.
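That worked example can be sketched in a couple of lines of Python (a toy sketch; the function name is mine, not standard terminology):

```python
# Rule from the video: a code of minimum distance d can correct
# floor((d - 1) / 2) single-bit errors.
def correctable_errors(d: int) -> int:
    return (d - 1) // 2  # integer division rounds down: the "floor of"

print(correctable_errors(3))  # distance 3: (3 - 1) / 2 = 1 error
print(correctable_errors(5))  # a distance-5 code could correct 2
```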
Using the "floor of" function it's saying:
"For distance three you can detect and
correct one error". So bearing that in
mind if we see things in future with
different distance properties at the end,
we can always apply this to find out how
many errors they could correct. Now:
the powers of two and the parity check bits.
What sequence of numbers were they
checking up on? This first block, the "1" -
this checks itself, and 3, and 5.
The 2-bit checks itself and 3. And the
4-bit checks 4 and 5. Where do those
come from?  How do you get those lists?  And
I think, last time, I perhaps didn't make
this quite crystal clear.
So, let me explain that those come from
effectively saying - for all of the things
that aren't powers of two -
how could you build them up from adding
together powers of 2?
1: you don't have to build it up, that's
itself. 1 is 1. Similarly 2 is a power of two
and it's just itself. Where you really
have to start doing this powers of two
"add them together to build them up" thing
is with 3. The most compact way to
represent 3 as a sum of powers of 2 is
1 + 2. What about 4? No problem: 4
is itself - it's a power of two.
When you get to 5, you say: "Ah! The most compact
way to do this is 1 + 4." Powers of 2.
Six? 2 + 4. Seven? Quite
complicated now. But if you think about it -
the sum of powers of two that adds up to seven,
done the most compact way? It's 1 + 2 + 4.
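Those decompositions, and the check lists we had earlier, can be sketched in Python (a toy sketch; the helper names are my own):

```python
# Each position's most compact sum of powers of 2 is just its
# binary representation, read off bit by bit.
def powers_of_two(n: int) -> list[int]:
    return [1 << i for i in range(n.bit_length()) if n & (1 << i)]

for n in range(1, 8):
    print(n, "=", " + ".join(str(p) for p in powers_of_two(n)))
# e.g. 3 = 1 + 2, 5 = 1 + 4, 6 = 2 + 4, 7 = 1 + 2 + 4

# Going backwards: parity bit p checks every position whose
# decomposition contains p - a single bitwise AND test.
def check_list(p: int, total_bits: int) -> list[int]:
    return [n for n in range(1, total_bits + 1) if n & p]

print(check_list(1, 5))  # [1, 3, 5]
print(check_list(2, 5))  # [2, 3]
print(check_list(4, 5))  # [4, 5]
```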
So really, these lists that
we had previously - about what checks for
what - are a result of writing these out
first and then saying: if we were going
backwards,
where does the number 1 appear? It appears
in itself. It appears in the
formula for 3; it appears in the formula
for 5; it appears in the formula for 7.
So that's where that first list
came out here: that 1 checks up on 1,
3, 5, 7, 9, 11 - all the odd
numbers.  Because 1 would appear in the
sums of powers of two that build those up.
Two? Yes, 2 appears obviously in its own list
but it appears first of all, next door, in
3, as being 1 + 2. Does it appear in 4?
No. 4 is all on its own - it's a
power of 2. What about 5?  Would 2 appear
in 5? No! It's 1 + 4. So, the next place that
2 would appear is 6 - which 
is 2 + 4. So I hope if you ....
>> Sean: So, if you had a 6-bit code, would 
2 have to check on ....
>> DFB: On one extra bit?
Yes it would! Similarly 4. It checks on 4;
it checks on 5; it also checks on 6. Because 6 is
2 + 4. So, if you're building up these
lists and making them longer - to do more complex
codes - then if you're encoding and decoding
in this by-hand method, you need to keep
your sums of powers of 2 up to date for
all the new positions - unless they're exact
powers of 2 - and then go backwards and say
"Ah! but these are my checklists that I
have built up from that". So, just to remind you
of what happened: on this one here - let's
take this second one - the information, or
message, bits are 0 and 1. That's bit 3 and bit 5.
And it says here that bits 1, 3 and 5, taken
together, must be even. Well, [bits] 3 and 5 -
0 and 1 -
add them together: that's 1. So
therefore bit 1, which we'll be filling in,
has to be 1, to make it even. We're going
to say - because you all want to know
how to decode it and detect errors and
correct them! - that this one here is going to
be badly transmitted. Instead of 10011,
it is received as 10111.
>> Sean: So straight away whoever
gets that is going to say 'That isn't right!'
>> DFB: They're going to say 'That isn't right' because
there's so few of these - there's only
four of them. You get to know them like old
friends. But imagine if you've got 64
of the so-and-so's - can you guarantee
that you'll be able to memorize every single one?
Er, no! You need an algorithm. And what we do
here is the reverse of what we did when we
encoded. We say: let's look at the list
that is checked by 1 -
that's 1, 3, 5 and so on.
That's what we received: bit 1 is 1;
bit 3 is 1, and 1 + 1 is 0; bit 5 is 1, and 0 + 1 is 1.
Aaaagh!!  It's supposed to be even parity. Wrong!
It came out as odd parity.  Bit 2 checks
out on itself and on bit 3.
It doesn't cover bit 5, because 5 is 1 + 4,
not 2 + 4. So you look at bit 2
and bit 3: 0 XOR-ed (or added, if
you like) with 1. It's a 1! It's odd
parity! It's wrong!
OK, now you look at the 4-bit and you say:
bit 4 checks out 4 and 5. 4 and 5: 1 + 1 -
it's zero. Hooray!
It passed the test.
>> Sean: Yeah, we failed two tests. We can work
out from that now what went wrong?
>> DFB: Yes, you can - very simply - because
the headers of these lists are the powers of
two that they check up on. And if the
"2" check has gone wrong and
the "1" check has gone wrong, then the wrong bit
was 1 + 2 - ordinary addition this time,
not binary addition. 1 + 2 makes 3.
Bit 3 is wrong! 
>> Sean: So then we flip bit 3 and we've
got the right column?
>> DFB: You flip bit 3. 10111 - that's the bad bit.
It's received as a 1.
It's wrong, so it must have been a 0.
10011, magic! Does that look familiar?
That's what you correct it back to.
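The whole decoding procedure just demonstrated can be sketched in a few lines of Python (a toy version; it assumes 1-based bit positions, with string index 0 holding bit 1):

```python
# By-hand Hamming decoding, as in the video: run each parity
# check list; add up the headers of the lists that fail; that
# sum is the position of the bad bit. Flip it.
def decode(received: str) -> str:
    n = len(received)
    error_pos = 0
    p = 1
    while p <= n:  # p runs over the parity positions 1, 2, 4, ...
        # mod-2 sum of every bit whose position contains the power p
        parity = sum(int(received[i - 1]) for i in range(1, n + 1) if i & p) % 2
        if parity:          # odd parity: this check list failed,
            error_pos += p  # so add its header (ordinary addition)
        p <<= 1
    if error_pos:           # flip the offending bit
        bits = list(received)
        bits[error_pos - 1] = "0" if bits[error_pos - 1] == "1" else "1"
        received = "".join(bits)
    return received

print(decode("10111"))  # the damaged word comes back as 10011
print(decode("10011"))  # an undamaged codeword is left alone
```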
And it's entirely done by getting these
lists of powers of two and doing another
check on them. It's exactly
the same as when you were encoding - it's just
that you're doing it again and saying: that's wrong, it doesn't add up!
>> Sean: So you simply reverse the process?
    I mean, does that always work then? 
>> DFB: Yes! 
>> Sean: For any of those bits - the parity bits .... ?
>> DFB: Ah! Now, that's a good point. If you're
thinking "Oh! but that's a message bit" -
oh yes, something really went wrong
there, because what was transmitted as 01
- that's 'Sunny', wasn't it? - has turned out as
11 - 'Rainy'. [The error] turned it into 'Rainy'.
That's fine. But what if you hit
a parity bit? Surely that messes
everything up? No, it doesn't.
It's actually dead easy. And what I
want to leave as an exercise for you
[the viewers] is this: this time, with
10011, don't hit a message bit. Hit that
parity bit at position 4. Change it to 0.
Do the checks and you'll find that the "1"
list passes with flying colours -
nothing wrong with it. The "2" list passes
with flying colours.
The only one that fails is the "4" list.
So, if only that one fails, that's it: the
wrong bit is 4. Nothing to add to it!
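That exercise can be worked through mechanically, too - a quick sketch (positions 1-based, as before):

```python
# Take 10011, damage parity bit 4 (change it to 0), and run the
# three check lists from the video.
received = "10001"  # 10011 with bit 4 flipped
failed = 0
for p in (1, 2, 4):
    covered = [i for i in range(1, 6) if i & p]         # p's check list
    if sum(int(received[i - 1]) for i in covered) % 2:  # odd parity?
        failed += p
print(failed)  # only the "4" list fails: the bad bit is position 4
```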
>> Sean: You've done the homework for them!
>> DFB: I've done the homework - again! So do it for
yourself, on any bit you like, and convince
yourself that it doesn't matter if it's a
message bit or a parity bit - this method
will home in on it. I've built this up
and I've explained it to you now. You can
go into a pub full of coding
theorists and say: "Hey! Like I've got this
[5,2,3] code" and they will say: "Well, you
realize that you derived it using
Richard Hamming's algorithm, but it's not a
true, proper Hamming code, because it's
not perfect." And you say: "Perfect?!
What's perfect?!" I think we have to go to
another video, Sean. I know they hate cliff-hangers,
but, yes, real Hamming codes are "perfect".
They really are.
[trailer for follow-up  EXTRA BITS video]
Now, the only sort of, if you
like, slight health warning to attach to
this - just to round things off now - is: by all
means do it by hand. If you want to code
it up as a program, great - you'll learn a lot -
but don't run away with the idea that
this is the most efficient way to do it.
