Create an S3 bucket to store all the face images to be indexed.
For each person, create a separate folder in the bucket and upload that person's photos to it.
To add another person, repeat these steps with a new folder.
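The bucket layout above (one folder per person) can be populated from the AWS console, or with a short boto3 sketch like the one below. The bucket name, person names, and file paths are placeholder assumptions, not values from the project.

```python
import os

def s3_key_for(person, filename):
    """Build the bucket key 'person-folder/filename' (one folder per person)."""
    return f"{person}/{os.path.basename(filename)}"

def upload_person(bucket, person, image_paths):
    """Upload one person's photos into that person's folder in the S3 bucket."""
    import boto3  # imported here so the pure helper above works without boto3 installed
    s3 = boto3.client("s3")
    for path in image_paths:
        s3.upload_file(path, bucket, s3_key_for(person, path))

# Hypothetical usage:
# upload_person("my-face-bucket", "einstein", ["photos/einstein1.jpg", "photos/einstein2.jpg"])
```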
Go to AWS IAM and create a user with access to S3 and Rekognition.
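A minimal IAM policy for that user could look like the following sketch; the bucket name is a placeholder, and you may prefer the broader AWS-managed S3/Rekognition policies instead.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::my-face-bucket",
        "arn:aws:s3:::my-face-bucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "rekognition:CreateCollection",
        "rekognition:ListCollections",
        "rekognition:IndexFaces",
        "rekognition:SearchFacesByImage"
      ],
      "Resource": "*"
    }
  ]
}
```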
Go to my GitHub and download the two Python files.
Open the downloaded .py files and fill in the bucket name, access key ID and secret access key, collection name, and directory.
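The fields to fill in presumably sit near the top of each file as plain constants, something like this sketch (every value here is a placeholder, not the project's real configuration):

```python
# Placeholder values -- replace with your own before running
BUCKET_NAME = "my-face-bucket"        # the S3 bucket created earlier
COLLECTION_ID = "my-face-collection"  # the Rekognition collection name
AWS_ACCESS_KEY_ID = "YOUR_ACCESS_KEY_ID"          # from the IAM user created above
AWS_SECRET_ACCESS_KEY = "YOUR_SECRET_ACCESS_KEY"
IMAGE_DIR = "/home/pi/faces"          # directory where the scripts will run
```

Note that boto3 can also pick up credentials from the `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` environment variables or from `aws configure`, so keys do not strictly have to live in the files.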
Transfer the two files to the Raspberry Pi, into the same directory you defined in the Python files.
SSH into the Raspberry Pi and go to the folder where you saved the two files.
Run index_face.py to index the faces into the Rekognition collection.
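The indexing step presumably works along these lines: create the Rekognition collection, walk the bucket, and index each image under its folder (person) name. This is a sketch, not the project's actual index_face.py; the bucket, collection, and region names are assumptions.

```python
def external_id_from_key(key):
    """Derive the person's name from the S3 key: the folder name is the label."""
    return key.split("/")[0]

def index_all_faces(bucket, collection_id, region="us-east-1"):
    import boto3  # imported here so the helper above stays importable without boto3
    rek = boto3.client("rekognition", region_name=region)
    s3 = boto3.client("s3", region_name=region)

    # Create the collection if it does not exist yet
    if collection_id not in rek.list_collections()["CollectionIds"]:
        rek.create_collection(CollectionId=collection_id)

    # Index every image in the bucket, labelled with the folder (person) name
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            rek.index_faces(
                CollectionId=collection_id,
                Image={"S3Object": {"Bucket": bucket, "Name": obj["Key"]}},
                ExternalImageId=external_id_from_key(obj["Key"]),
            )
```

Using the folder name as `ExternalImageId` is what lets the matching step later report a person's name rather than an opaque face ID.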
You can now run match_faces.py to recognize faces.
match_faces.py previews the camera feed and takes a photo every 10 seconds.
Here I am using my tablet to display two photos, one of Einstein and one of Newton, and you can see the preview from the Raspberry Pi.
When Einstein's photo is captured, Python prints the matched name (Einstein) along with the similarity and confidence values.
The same goes for Newton.
You can also index your own photos, and the Raspberry Pi will recognize you.
