Can We Make a Computer Understand That Elon Musk and Tony Stark Are Different People?

Using transfer learning for image classification

Sri ram_654
Aug 8, 2022 · 4 min read


Table of contents

  • Why? 🤷‍♂️
  • Conclusion

It's going to be fun. Trust me 😂

Why? 🤷‍♂️

You see, I wanted to do a project with Transfer Learning

But it also needs to be interesting for the humans who are going to read this

Then I suddenly got an idea

You see, David (the GOAT) once wisely said

humans are more likely to be drawn to famous people

So I can take that... and apply my very, very small knowledge of machine learning

And ta-da!

An idea that no one had ever thought of before arose from the ground

What is Transfer Learning, Anyway? 🤔

I covered this topic briefly in my last blog on satellite classification

But let's get our hands a little dirty this time

And if you are doing this right now...

In the end, if you stick with it, you will do this

So come on, let's get started, shall we?

Transfer Learning overview

pexels-roman-odintsov-5903451.jpg

You see, the name describes itself: Transfer. We are just going to transfer knowledge from one object to another

Here the object represents our AI model

Ok, but what does this have to do with a pizza taken from the oven?

In some contexts, it's related to transfer learning... I don't know

It's Danny's idea anyway

Uh oh... I think we made Danny angry... let's not talk about this again

Let's Train Classy, our AI Model

pexels-pavel-danilyuk-8439069.jpg

First, we need data

I got it from Google. I just searched for Elon Musk photos and Tony Stark photos

And got these 10 images for each class (person):

elon.PNG

tony.PNG
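One small but important detail: for the training code later, Keras expects the images to be arranged in one folder per class so it can work out the labels by itself. Based on the train_dir and class_names used below, I'm assuming the folders look roughly like this:

elonvsstark/
    elon/    <- the 10 Elon Musk images
    stark/   <- the 10 Tony Stark images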

Problem with the normal approach

You see, most of the time we build our model from the ground up to recognize the patterns in the given data

But a normal Dense network that looks like this needs far more images to recognize the patterns

Capture.PNG
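Just to make the comparison concrete, here is a minimal sketch of what such a from-scratch Dense model could look like in code (the layer sizes are my assumption, not the exact model in the picture):

# A from-scratch Dense classifier, just for comparison (layer sizes are assumptions).
# With only 10 images per class, a model like this has far too many weights to
# learn reliable face patterns, which is exactly the problem described below.
import tensorflow as tf

scratch_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(224, 224, 3)),  # flatten the raw pixels
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax")        # elon vs stark
])
scratch_model.compile(loss="categorical_crossentropy",
                      optimizer="adam",
                      metrics=["accuracy"])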

So now we have this problem (or I intentionally created this problem): there is no large dataset available for our model to learn the patterns that separate the two faces (Elon vs Tony)

This is where we get into the Transfer Learning part of this blog


Transfer Learning:

Let me explain it in the context of our (Elon vs Tony) problem:

We take the MobileNet model from TensorFlow Hub, which is one of the most popular places to grab models that have already learned from a huge dataset like ImageNet (which has like a million images and a thousand classes)

Then we take that model and train it on our 10 images per class, so it can easily and quickly understand our images by looking at just a very small amount of data

In short, we only need a few images to train our AI, because it has already been trained on so many images (covering problems like ours and more) that it knows how to pick up the patterns in new images quickly and efficiently from just a few examples


How do humans identify a person?

pexels-ds-stories-9228408.jpg

You see, on average we have 86 billion neurons in our brain that work together to build a pattern of a person's face so we can remember them

If I wanted to build 86 billion neurons, or even something close to a billion neurons, it's going to take a while... I mean quite literally

So here we can use the transfer learning method to train our model on 10 images for each class and predict them with an accuracy of 95%

Code :

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Pretrained MobileNetV3 model from TensorFlow Hub
MobileNet_Url = "https://tfhub.dev/google/imagenet/mobilenet_v3_large_100_224/classification/5"

# One sub-folder per class (elon/ and stark/), 10 images each
train_dir = "/content/drive/MyDrive/Datasets/elonvsstark/"
class_names = ["elon", "stark"]

# Rescale pixel values to [0, 1] and load the images straight from the folders
im = ImageDataGenerator(rescale=1/255)
train_data = im.flow_from_directory(train_dir, batch_size=32,
                                    target_size=(224, 224), class_mode="categorical")

# Wrap the pretrained model as a frozen layer (trainable=False) and
# add one new Dense layer that learns to tell our 2 classes apart
mobilenet_layer = hub.KerasLayer(MobileNet_Url, trainable=False)
model = tf.keras.Sequential([mobilenet_layer,
                             tf.keras.layers.Dense(2, activation="softmax")])

model.compile(loss="categorical_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])

model_history = model.fit(train_data, epochs=5, steps_per_epoch=len(train_data))

Output :

Capture_1.PNG

As we can see, our Classy AI model got 95% accuracy by just looking at 10 images per class 🔥


Let's see it in action 😎


Prediction :

Capture_3.PNG

Capture_4.PNG

Capture_5.PNG
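The post only shows the prediction screenshots, not the prediction code, so here is a minimal sketch of how a single image can be pushed through the trained model (the helper name pred_and_plot and the test image path are my assumptions):

# A rough sketch of predicting on one image; preprocessing mirrors training
# (resize to 224x224, rescale pixels to [0, 1]).
import tensorflow as tf
import matplotlib.pyplot as plt

def pred_and_plot(model, image_path, class_names):
    # Read and decode the image file into a tensor
    img = tf.io.read_file(image_path)
    img = tf.image.decode_image(img, channels=3, expand_animations=False)
    # Resize and rescale exactly like the training data
    img = tf.image.resize(img, (224, 224)) / 255.
    # Add a batch dimension and get the softmax probabilities for [elon, stark]
    probs = model.predict(tf.expand_dims(img, axis=0))[0]
    pred_class = class_names[int(probs.argmax())]
    # Show the image with the predicted class and its confidence
    plt.imshow(img)
    plt.title(f"{pred_class} ({probs.max():.2%})")
    plt.axis("off")
    plt.show()

# Example call (the image path is just a placeholder)
pred_and_plot(model, "/content/test_image.jpg", class_names)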

Let's intentionally confuse our model, because it's doing very well so far 🔥💥


Capture_6.PNG


Well, I'm going to leave it to you to judge our model on this one. What do you think of the prediction value here, where both of the classes are present in the same picture? (Remember that the softmax output over our two classes always sums to 1, so a picture containing both people forces the model to split that probability between them.)


Conclusion

As far as deep learning goes, we don't always have to create our own model from scratch; we just have to know when and where to use an existing model

Here is my code : @sriramgithub

My LinkedIn : don't click here

If you think you are stuck or I didn't explain this clearly enough

Just let me know

until then

Bye from me and from Danny

Well, I think he is going to be busy... for another two days

Ok, then it's just me

Bye, ( ̄︶ ̄)↗ 

 