Hey guys, and welcome back. Today I'm going to show
you how to use AI face detection to create your
own parallax hologram on glass.
Wait, so what is a parallax hologram?
Well, parallax is the difference in the apparent
position of an object viewed along two different
lines of sight.
So, say I have an object in front of me,
right?
If I view it from one angle I will see one
view of the object, and if I shift my view
to the side I will see a different view of
the object.
And that is exactly the effect that we are
going to be creating.
A parallax hologram is a technique where the
image is generated based on the perspective
of the viewer to create the effect of a hologram.
So how does it work?
So, there are a couple of videos that explain
the parallax effect in depth.
One video is from MIT, called "The Full
Parallax 3D Computer Generated Hologram With
Occlusion Effect Using Ray Casting Technique"
– wow!
A really long title, but worth watching.
The other is by Manic Arts, who talks about
the parallax effect with respect to Unity
game development.
It's a really great and intuitive explanation.
I'll have links to both videos in the
comments.
How to Get Started
So for us to create this effect, we are going
to be using face detection to estimate the
position of my face and then move the camera
view proportionally.
Okay, so before we get into it, remember to
have your glass project set up already. If
you are not sure what I am talking about,
then please check out my first video in the
playlist to learn how to set up your transparent
glass display.
You can follow along with this tutorial using
just your PC and webcam while you wait for
your smart glass setup to arrive.
Another thing: please don't forget to watch
right to the end to find out how you can win
one of 20 free enrolments to the course of
your choice, including this one.
And, if you like this video, please give it
a thumbs up, comment down below if you have
any questions or any suggestions for future
videos.
Subscribe so you get notified when the next
AI/AR tutorial is out.
And feel free to share this video with any
of your friends if you think it will be helpful
to them.
So let's get right into it – roll intro!
ROLL INTRO
Okay, before you get started, make sure you
have a new project created in Unity.
I have a new project called "Phase 2 Face
Detection". Also ensure that you have a
webcam handy.
Next up, let's grab a couple of items
for the project.
We can head over to our Project Smart Glass
repo on GitHub – you will find it at
this link: github.com/reigngt09/Project-Smart-Glass.
Go ahead and Clone or Download the repo.
If you go into Phase 2 you will see 3 files,
and we will go in depth on what they all do.
But for now: the FlipHorizontal.cs script
flips our Unity game view camera along its
horizontal axis, because our projector is
placed behind the glass and as a result the
image is inverted, or flipped.
So this script corrects that issue.
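To give you a rough idea of how a flip like this can work, here's a minimal sketch of my own (not necessarily the exact FlipHorizontal.cs from the repo – the class name, field name, and approach are assumptions). It mirrors the camera's projection matrix along X, and inverts culling so the mirrored geometry still renders with the correct winding:

```csharp
using UnityEngine;

// Sketch: mirror a camera's output horizontally, e.g. because the
// projector sits behind the glass and flips the image.
// Attach to the Camera GameObject.
public class FlipHorizontalSketch : MonoBehaviour
{
    public bool flip = true;   // toggle in the Inspector

    private Camera cam;

    void Start()
    {
        cam = GetComponent<Camera>();
    }

    void OnPreCull()
    {
        if (!flip) return;
        // Negating the X scale of the projection matrix mirrors the view.
        cam.ResetProjectionMatrix();
        cam.projectionMatrix = cam.projectionMatrix * Matrix4x4.Scale(new Vector3(-1f, 1f, 1f));
    }

    void OnPreRender()
    {
        // A negative scale reverses triangle winding, so invert culling while flipped.
        GL.invertCulling = flip;
    }

    void OnPostRender()
    {
        GL.invertCulling = false;
    }
}
```

Since this depends on Unity's render callbacks, it only runs inside the engine; the toggle behaves like the checkbox we'll see later on the Main Camera.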
Next we have the Readme.MD file, which contains
the link to the main project files for Phase
2; the RotatePerspective.cs script will
be discussed later in this video.
For now, let's click on this link to download
the project files.
They are hosted in cloud storage because they
are a bit too large to upload to GitHub.
OpenCV4Unity
Back in Unity, before we import our project
files, we will need to download and import
OpenCV4Unity from the Asset Store. We actually
used this package quite intensively in our
Project EDITH series.
It is a paid asset, but we will be using it
comprehensively in this course.
Click to import it and give this process some
time to complete.
Click Import again if it prompts you.
Face Detection
Now we need to test the face detection module
and get it up and running.
For this, type "dnn" into the search field,
which will allow us to open up the setup_dnn_module
PDF.
This document provides the instructions for
setting up the ResnetSSD Face Detection Example.
Click on the links to download both of these
files.
Click to show in Explorer.
We need to copy and paste the StreamingAssets
folder so that it sits directly under Assets.
It's really important that you do this; otherwise
the app will tell you that it cannot find
the model.
Once you have done that, copy and paste the
model files into Assets>>StreamingAssets>>dnn.
With regards to the prototxt file, when it
opens, just copy all of the text into a new
file and rename it deploy.prototxt.
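If you want to sanity-check that the files ended up in the right place, here's a tiny helper of my own (not part of the repo). The caffemodel filename is an assumption based on the standard OpenCV ResNet SSD face detector; adjust it to match the file you downloaded:

```csharp
using System;
using System.IO;

// Quick check: do the face-detection model files sit where the
// dnn example expects them? Run from your Unity project root.
class CheckDnnFiles
{
    static void Main()
    {
        string dnnDir = Path.Combine("Assets", "StreamingAssets", "dnn");
        string[] expected =
        {
            "deploy.prototxt",
            // Assumed name of the standard OpenCV ResNet SSD face model:
            "res10_300x300_ssd_iter_140000.caffemodel"
        };
        foreach (string f in expected)
        {
            string path = Path.Combine(dnnDir, f);
            Console.WriteLine(path + (File.Exists(path) ? " found" : " MISSING"));
        }
    }
}
```

If either file prints MISSING, the example scene will fail with the "cannot find the model" prompt mentioned above.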
Running Face Detection
Go to Assets>>OpenCVForUnity>>Examples>>MainModules>>dnn>>ResnetSSDFaceDetectionExample
and here you will find the example scene
that we will use for our project.
If you did everything correctly and run the
example, you should be able to detect your
face in real time.
Mine runs at around 119 FPS on my GTX 1080
GPU; your framerate may vary.
Importing and Setup of the Project Files
If all went well, we can then go ahead and
import our project files that we have just
downloaded called "Phase 2 – ParallaxHologram.unitypackage"
Once imported, go over to Scenes>>Phase2Assets
and drag the Main Camera Prefab into the hierarchy.
You can go ahead and delete the Main Camera
and rename the new one.
In the main camera prefab, there are a couple
of items that need to be unpacked from the
main camera.
Right click on Main Camera and click on "Unpack
Prefab Completely" so we can change a few
things.
Now we can move our Car prefab out of the Main
Camera, so it becomes a static object in the
scene.
Main Camera Scripts
Looking at the Main Camera we have a couple
of additional scripts.
The first one, which we discussed earlier, is
the FlipHorizontal script, which allows us to
flip the game view horizontally.
I will show you what I mean in a bit.
We have a check box, which also helps us to
toggle our game view.
Next we have the main script that I wrote,
called RotatePerspective; what it does is
control our camera's position and rotation
to create the parallax effect.
For this to work, please ensure that your
camera is set to orthographic projection to
maintain the effect.
If we play, and with the webcam aimed at our
face, we should be able to view this effect
quite nicely.
Now, changing to Scene view, we can see exactly
what's happening and get an intuitive understanding
of what's going on here.
Now that we have an orthographic view of the
car, we can see how the camera rotates around
the front face of the car.
If we change the "Calibration_X constant",
you can see exactly how it influences the
effect.
If we put in 5, you can see how much the model
moves from its center axis.
Our goal is to minimize the movement of the
front end of the vehicle.
Now if we change this value to 50, you can
see the minimal angle of rotation that we
have.
The ideal value is 12.4, which I derived
empirically; I did not have much time to
work through the maths.
Over here we have already imported our quad,
so we can go ahead and delete this one.
Click Play.
To get rid of these GUI buttons, we can go
into the Canvas and just disable the menu
for now.
Code
Let's now dive into the code, where we can
understand exactly what is going on under
the hood.
Open up the code in Visual Studio.
First up we have our calibration constants,
which we discussed earlier. We need two of
them: one for the x-axis and the other for
the y-axis.
Next we have our input GameObject called faceCube;
essentially this is a 3D object that we use
to indicate the position of the face from
our OpenCV4Unity dnn script.
You can see here how the position of the green
bounding box corresponds to the position of
the cube.
So that's the position vector that we can
feed into any script.
Over here we have three Vector3s. The first
one is the initial position of the faceCube,
which we can later map to the rotation and
position of the Main Camera.
The current rotation position is the continuously
updated position of the faceCube.
These two Vector3 parameters are subtracted
from each other to get the variable
Diff_Rotation_Position, also known as the difference.
So on Awake, the app obtains the initial
position of the detected face.
The same is done in the Update function
to get the updated position of the face.
We then find the difference between the two
and use it to transform the Main Camera's
position and rotation.
Our calibration constants essentially just
scale the output proportionally until we
achieve the parallax effect that we want.
For transform.position, I just set it to one,
but you can add a positional calibration
constant if you want to.
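Putting the steps above together, here's a hedged sketch of the idea in my own words (not the exact RotatePerspective.cs from the repo – the field names, the axis mapping, and the way the calibration constants are applied are all assumptions based on the description above):

```csharp
using UnityEngine;

// Sketch of the parallax idea: rotate the camera proportionally to how
// far the detected face has moved from its starting position.
// Assumption: faceCube's position is driven by the OpenCV4Unity
// face-detection script. Attach this to the Main Camera.
public class RotatePerspectiveSketch : MonoBehaviour
{
    public GameObject faceCube;          // 3D marker at the detected face position
    public float calibrationX = 12.4f;   // empirically derived in the video
    public float calibrationY = 12.4f;

    private Vector3 initialRotationPosition;
    private Vector3 currentRotationPosition;
    private Vector3 diffRotationPosition;

    void Awake()
    {
        // Capture the face position once at startup.
        initialRotationPosition = faceCube.transform.position;
    }

    void Update()
    {
        // Continuously updated face position.
        currentRotationPosition = faceCube.transform.position;

        // Difference between where the face started and where it is now.
        diffRotationPosition = currentRotationPosition - initialRotationPosition;

        // Scale the difference by the calibration constants and apply it
        // as the camera's rotation to fake the perspective shift.
        transform.rotation = Quaternion.Euler(
            -diffRotationPosition.y * calibrationY,
            diffRotationPosition.x * calibrationX,
            0f);

        // Position is held mostly fixed; y is simply set to one here,
        // but a positional calibration constant could scale it instead.
        transform.position = new Vector3(0f, 1f, transform.position.z);
    }
}
```

With larger calibration constants the camera rotates less per unit of face movement, which matches the behaviour we saw when changing the value from 5 to 50 in the Inspector.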
Running the App on Glass
Now, back in Unity, we run the app and test
out this effect.
Because we are running off our PC, there are
no devices to deploy to.
We just need to make the Game window large
enough, extend our display onto the glass
display, and observe our parallax hologram
illusion.
So I hope you enjoyed that.
Please comment down below if you have any
questions about Project Smart Glass, or any
suggestions or ideas that you would like me
to cover in future videos.
And if you like this video, please give it
a thumbs up, and subscribe so you get notified
when the next AI/AR tutorial is out.
Lastly, feel free to share this video; it'll
really show your support for the channel.
Phase 1
We covered Phase 1 in a previous video. That
effect is pretty much static, and we can't
do much with it except admire various demo
videos.
Phase 2
We've just completed this phase. It is where
we use AI face detection in Unity to control
a 3D model and exhibit a parallax hologram,
which is essentially a holographic illusion
created by perspective shifting.
Phase 3
This is the next lecture, where we will be
using the Leap Motion for hand detection;
here we will see how we can harmonize the
transparent display with natural interaction.
Phase 4
We will be using object detection.
Phase 5
We will enhance object detection using Mask
R-CNN.
Phase 6
Pose estimation
Phases 7 & 8
will be bonus projects of Project Smart Glass,
to be revealed closer to the later phases of
this project.
Okay, so earlier I mentioned that if you are
interested in winning one of 20 free enrolments
to Augmented Startups courses, then all you
have to do is like and comment on this video
and sign up to the webinar at the link below.
Winners are announced every 3 months.
And you can comment about anything, or provide
suggestions on upcoming videos and ideas as
I’ve mentioned earlier.
So the full Project Smart Glass series will
be available as a standalone course and as
part of Ultimate AI CV PRO.
The 3 most advanced phases of this project
will NOT be available on YouTube, but will
be exclusive to the course on Augmented Startups.
The links are all down below.
If you are interested in Artificial Intelligence
and Computer Vision, we have a course that
teaches you AI object detection, object segmentation,
pose estimation, and Android AI app development,
along with the complete Project EDITH and
Project Smart Glass tutorial series.
Alright, Thank you for watching and we’ll
see you in the next lecture.
