[Brady] What are those zig-zaggy lines I keep seeing on videos on YouTube?
[Prof] Wait, you mean ones like this?
[Prof] So you can see on the BBC's news ticker here we've got a lot of this sort of combing, this zigzag-like effect, on the video.
[Brady] What's going on there?
[Prof] Well, this is an interesting problem, but the basic problem is that computer people don't understand video. What you're seeing there is an artifact of the way that the video system was put together in the 1930s. When they developed the first video systems, they designed them using analog electronics. Remember, this is about ten years before the first electronic computer was invented, so they were having to develop the video system using pure analog electronics, and they had to make sensible design decisions, at that time, about how to encode the video so they could transmit it and get it into people's homes, where they could watch it on their television screens.
[Brady] That's all well and good, but we've got modern computers, and we've moved on a lot since then. Why have we still got stuff that looks exactly like this, and how can we fix it?
[Prof] Well, we'll come to that, but let's actually look at what is actually happening, at how these things are put together. Now, Sean has hopefully given me a bit of computer listing paper, which is great because it's divided into lines. So, the way that the image was built up is that every fiftieth of a second (we talked about in the previous video why you need to do 50 frames per second to get decent motion rendition) the camera would scan the image from the top left to the bottom right. It would go along the first line, fly back to the beginning, and go along the next line; swing back, then go along the next one, and so on, until eventually it comes to the end, at which point it goes back up to the beginning of the frame and starts doing the same thing again. But there's a problem.
The amount of data that is generated scanning a four-hundred-and-five-line TV picture (back in 1936 in the UK) at 50 frames per second was too much to be reliably transmitted with the technology of the time. There's too much information that you need to transmit; too much bandwidth would be taken up. So they needed to do something. They couldn't go down to 25 frames per second, because then it would flicker like crazy, as we talked about in the last video, so they couldn't reduce the frame rate; they were still shooting for 50 frames per second. So they came up with a trick, which they call interlacing. So if we start again: if we call the first one line 1,
the second one line 2, then 3, 4, 5, and so on, what they said was: we will transmit line 1 first. So you scan across line 1, like so, and then we'll skip over line 2 and transmit line 3. So we fly back and we transmit line 3, and then we skip over line 4 and transmit line 5, and you do all of that until you get down to the bottom of the image, transmitting only the odd-numbered lines. So in that fiftieth of a second, rather than sending the whole frame, you send every other line, or half the frame, and they refer to that as a field. Then you go back and scan the even-numbered lines, so 2, 4, 6, 8 and so on; you scan all the even-numbered lines in the next fiftieth of a second. So what you actually end up doing is: you send your first field, which would be all the odd lines, and then a fiftieth of a second later you send the second field, which has got all the even lines in it, and then you send the third field, which has got all the odd lines in it again, and so on.
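The odd/even split being described can be sketched with NumPy array slicing (my own illustration, not something from the video; the toy six-line frame and the function name are invented):

```python
import numpy as np

def split_fields(frame):
    """Split a full frame into the two interlaced fields.
    Lines are numbered from 1, so the "odd" field holds lines
    1, 3, 5, ... (row indices 0, 2, 4, ...) and the "even" field
    holds lines 2, 4, 6, ... (row indices 1, 3, 5, ...)."""
    return frame[0::2], frame[1::2]

# A toy six-line "frame" whose rows are just numbered 1..6.
frame = np.arange(1, 7).reshape(6, 1)
odd, even = split_fields(frame)
print(odd.ravel())   # [1 3 5]  -> field 1, sent first
print(even.ravel())  # [2 4 6]  -> field 2, sent 1/50 s later
```

Each field has half the lines of the full frame, which is exactly the halving of transmitted information the trick was for.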
So you're sending the odd lines, then the even lines, then the odd lines, and so on. Now, because this was all being done with analog equipment, you couldn't store the image and send the odd lines and the even lines from the same point in time. So when you capture the odd lines here, this would start at time zero; when you start capturing the even lines, it's a fiftieth of a second later, so you're capturing those 20 milliseconds later, and so on. So each field is sampled at a different point in time. You've got 50 discrete images captured each second, but each of them only has half the number of lines, and each has a different half. And that's fine: you can transmit that, you can record it on analogue video tape, you can do all sorts of processing with it, until you start coming to put it into computers, because what happened was
that people started to treat it as frames. Actually, people still talk about things being 25 frames per second in the UK; they never were, they were always 50 fields per second. So when it gets pushed into the computer, your computer will capture the first, odd, field, then capture the second, even, field, and it will start to interlace them back together to create a single frame, with both of the fields in the one image, and it puts them together and stores them in a QuickTime file or an AVI or whatever it is you're using, at 25 frames per second.
Now, that was fine, because you could play them back out of a capture card in the mid-nineties, back onto a TV, and it looked fine, because the card would pick the fields apart and send them out in the right order. The problem comes if you then try to display that image directly on a computer screen: because things are moving between each of those fields, you get these sort of little zigzag effects. This letter T here is actually moving horizontally, so each time it's captured the lines are in a different position, and when you interleave them you get that sort of combing effect on the edges.
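That combing can be reproduced with a toy simulation (my own sketch, nothing from the video): take a vertical black-to-white edge that moves two pixels between the two field times, then weave the odd lines of one instant with the even lines of the other:

```python
import numpy as np

def frame_with_edge(width, edge_col, height=6):
    """A toy picture: 0 (black) left of edge_col, 1 (white) from it on."""
    row = (np.arange(width) >= edge_col).astype(int)
    return np.tile(row, (height, 1))

at_t0 = frame_with_edge(8, edge_col=2)  # scene when the odd lines are scanned
at_t1 = frame_with_edge(8, edge_col=4)  # scene 1/50 s later; the edge has moved

woven = np.empty_like(at_t0)
woven[0::2] = at_t0[0::2]  # odd lines sampled at time t0
woven[1::2] = at_t1[1::2]  # even lines sampled at time t1

print(woven)  # the edge zigzags between columns 2 and 4 on alternate lines
```

The woven frame mixes two moments in time, so any moving edge turns into exactly the comb pattern visible on the news ticker.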
[Brady] It's a pain! How do you display it properly?
[Prof] You do ask difficult questions.
So what do you have to do? Well, first of all, you need to think about it not as being a single frame that you interlace back together, but actually as being separate fields. If we have the lines along here and time along here, we've got time 0 there, field 1 here, field 2 here, field 3 here, field 4, field 5. At point 0 (let's say we do this with the odd ones) we're capturing these lines here. At point 1 we're actually capturing the bits in the middle: capturing that bit, capturing that bit, and so on down. At point 2 we're capturing these again, and so on; at point 3 we're capturing here and here. What they're trying to do with interlacing is to reduce the amount of information they need to transmit. So you could think of it, perhaps, as a sort of early analog compression system: a bit like MP3 reduces the amount of information needed to store some audio, interlacing is doing the same thing with video, reducing the amount of information, but hopefully it's not throwing away anything that you're going to see. For something static, like a video of this
book, very little is lost by transmitting it in an interlaced form over a non-interlaced form (a progressive form, as it would be called); you'd still see pretty much all the detail between the two. So we reduce the amount of information we transmit by half, but we're still effectively transmitting what looks the same to the end user when they're viewing it on the television screen, certainly at the time when this was developed. But we are still throwing away information: at every sample point we throw away half the information we could possibly have captured. And actually, what we're throwing away is the separation between vertical resolution, i.e. how much detail we can represent, and temporal resolution.
What we've got here is a single capture point: at point 0 we capture all the odd lines, at point 1 we capture all the even lines. Now think about something like this piece of paper I'm capturing here. At point 0, I capture this line here, which is white, and this line, which is white, and so on all the way down, so effectively what we've captured at this point is a completely white field. At the next point, though, we capture this line, which is green, we capture this line, which is also green, and this line, which is also green, so at this point in time we capture a completely green field. At the next point in time we go back and capture a completely white field, and then we capture a completely green field. And if you display this, what you would see would not be a series of white and green lines, but actually the image flashing between white and green. So we've got to a situation where, yes, we've reduced the amount of information that we need to transmit, but we've also manipulated the information so that we can no longer distinguish high-resolution vertical detail from change over time.
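The flashing white/green example can be simulated directly (again my own sketch; 0 stands for a white line and 1 for a green line):

```python
import numpy as np

# The full picture: lines alternate white (0), green (1), white, green, ...
stripes = np.array([0, 1] * 3)

odd_field = stripes[0::2]   # sampled at point 0
even_field = stripes[1::2]  # sampled a fiftieth of a second later

print(odd_field)   # [0 0 0]  -> a completely white field
print(even_field)  # [1 1 1]  -> a completely green field
```

Played back in sequence, the fields really do alternate between all-white and all-green: fine vertical stripes and a scene flashing over time produce identical interlaced data.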
[Brady] He's here just to help, to try to help out, yeah?
[Prof] I'm sorry; seriously, you can't do a Computerphile yet. So, effectively, what
we've got is that we've mapped both the high-frequency vertical information and the temporal information into the same part of the encoding, and so there's no easy way to distinguish between the two. High-frequency information, like this oscillating white-and-green pattern, is indistinguishable, once we've interlaced it, from a flashing white-and-green screen. There's just no way you can deal with that. The way you get round this is that inside the camera, when you sample the information, you filter it vertically, so that you don't have that high-resolution image. So effectively an interlaced camera has got slightly lower vertical resolution than a progressive one would have, but probably around seventy percent of the vertical information is still there. That's still better than you'd get if you transmitted a smaller number of lines every frame, so it gives you the benefit.
[Brady] But this still comes down to: how on earth do we display interlaced material on something like a computer, which has an inherently progressive display, without getting all these sort of zigzaggy patterns?
[Prof] Well, it's not an easy problem to solve. It can be, in some situations; there are some situations where it can be really easy to deal with. For example, film transferred onto video: film shot at 25 frames per second, as we talked about in the other video. When that is transferred onto video tape, you take the frame of film, you scan the odd-numbered lines and transmit that as a field, then scan the even-numbered lines and transmit those as a field, and then move on to the next frame. In that case the two fields do come from the same point in time, so actually the best way to get them back together is literally to weave them together, and you get the same film frame that you started with, a full representation: you get all the detail. So that's the best way to do it for film material. For something like the video we've seen on the computer screen, that doesn't work. What you need to do is to only have information that should be at point 0 displayed there, only have information from point 1 displayed here, and then have information from point 2 displayed here. So how do you go about that? Well, there are
several ways you could do it. You could just say: well, okay, I need to fill in this gap here, so what I'll do is interpolate between line 1 and line 3 and work out what line 2 would be. I'll just use the information inside that point in time and create a sort of average there, and then I'll do the same between lines 3 and 5, between 5 and 7, and so on. The problem with that is that you immediately reduce the resolution down to only being as high as the number of lines you've got in the field. So is there any other way you can do this?
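That first, purely spatial approach (often called "bob" or line doubling) can be sketched like this; the function is my own illustration, not any particular player's implementation:

```python
import numpy as np

def bob_deinterlace(field):
    """Rebuild a full frame from one field alone: keep the field's
    lines on rows 0, 2, 4, ... and fill each missing row with the
    average of its neighbours (repeating the last line at the edge)."""
    h, w = field.shape
    frame = np.zeros((2 * h, w))
    frame[0::2] = field
    for r in range(1, 2 * h, 2):
        below = frame[r + 1] if r + 1 < 2 * h else frame[r - 1]
        frame[r] = (frame[r - 1] + below) / 2
    return frame

field = np.array([[10.0], [20.0], [30.0]])   # one three-line field
print(bob_deinterlace(field).ravel())        # [10. 15. 20. 25. 30. 30.]
```

Every displayed frame then comes from a single point in time, so the combing goes, but the vertical resolution is only that of one field.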
Well, yeah. Basically, what you want to do is to generate the information that would have been there if the camera had captured it at that point, and the way you can do that is to not just use the line above and the line below, but also to realize that you know what was in that position a fiftieth of a second before, and you can also know what's in that position a fiftieth of a second in the future, if you delay everything by a single field. So what we do is, instead of actually trying to display this field here at point 2 immediately, we store it digitally (we can do that in a computer quite easily) until we get to point 3, at which point we've already seen this information, so we can use that; we've got this information that's just arrived; and we've got these two bits of information; and we can combine all of that together to generate the data that should be at this point, and the data that should be at this point, and so on. So we delay the video by a field or two fields, and that means we now have more information: we've got something that's in the future from the point we're generating, we've got the information in the past that we've already seen, and the information that's on different lines as well, and we can combine all that with various complicated algorithms to generate the information. The more expensive your equipment, the cleverer the algorithm will be, and the better it will do.
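A toy version of that idea might look like this; it is only a sketch of the principle under my own assumptions (the current field holds the odd lines, the fields either side hold the even lines, and the agreement threshold is invented), not a real algorithm like the ones built into VLC:

```python
import numpy as np

def motion_adaptive_deinterlace(prev_field, cur_field, next_field, thresh=1.0):
    """Toy motion-adaptive deinterlacer: the current field supplies
    half the lines; each missing line comes from the previous/next
    fields when the picture is static there, and from spatial
    interpolation when it is moving."""
    h, w = cur_field.shape
    frame = np.zeros((2 * h, w))
    frame[0::2] = cur_field              # odd lines: from the current field
    # Spatial guess for each missing line: average of the current-field
    # lines above and below it (repeat the last line at the bottom edge).
    spatial = np.empty((h, w))
    for i in range(h):
        below = cur_field[i + 1] if i + 1 < h else cur_field[i]
        spatial[i] = (cur_field[i] + below) / 2
    # Where the previous and next fields agree, the scene is static
    # there, so their value is the true missing line.
    static = np.abs(prev_field - next_field) <= thresh
    frame[1::2] = np.where(static, (prev_field + next_field) / 2, spatial)
    return frame

cur = np.array([[0.0], [10.0]])      # odd lines at time t
prev_ = np.array([[5.0], [15.0]])    # even lines at t - 1/50 s
next_ = np.array([[5.0], [15.0]])    # even lines at t + 1/50 s
out = motion_adaptive_deinterlace(prev_, cur, next_)
print(out.ravel())  # rows: 0, 5, 10, 15
```

In the static case the missing even lines are recovered exactly from the surrounding fields; where the fields disagree, something moved, and the code falls back to the spatial guess.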
And so if we were to enable one of them on our computer: VLC, which is what I'm using here, has got several built in, including Yadif ("yet another deinterlacing filter"). If I turn one on, you will see that suddenly, instead of being zigzaggy, it goes back to being straightforward text.
[Brady] Most TVs these days are LCD panels, or LED or whatever, maybe plasma. So are they doing this kind of stuff all the time, or do they use a different method?
[Prof] That's exactly what they're doing. Inside every LCD you'll get, there will be a chip which will be taking the incoming interlaced video and processing it to produce progressive video. And depending on how much you pay for that chip, whether it's a 50p chip or a 75p chip (and that 50p is the price, not the resolution of the video it's producing), or a £2.50 chip (or fifty cents, because it's probably worth more these days), you are going to get better quality out. Or you can have one that's reprogrammable, but that costs even more money.
[Brady] So that's why we get ugly lines, and that's how you essentially fix them.
[Prof] Now, you didn't think it was going to be that simple, did you? All right, let's go back to the beginning: so we've got the video signal coming in as a series of individual images, and then, a bit later on, we move it up and show the next one, and so on, and by doing that fast enough you get the appearance of motion.
