
Complete Architectural Details of all EfficientNet Models

source link: https://towardsdatascience.com/complete-architectural-details-of-all-efficientnet-models-5fd5b736142?gi=f35d4914212e

Let’s dive deep into the architectural details of all the different EfficientNet Models and find out how they differ from each other.


Photo by Joel Filipe on Unsplash

I was scrolling through notebooks in a Kaggle competition and found that almost everyone was using EfficientNet as their backbone, which I had not heard about until then. It was introduced by Google AI in this paper, where the authors proposed a method that, as the name suggests, is more efficient while also improving state-of-the-art results. Generally, models are made too wide, too deep, or trained at a very high resolution. Increasing these characteristics helps at first, but the gains quickly saturate, and the resulting model simply has more parameters and is therefore not efficient. In EfficientNet, these dimensions are scaled in a more principled way: all of them are increased gradually, together.
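Concretely, the paper formalises this as compound scaling: depth, width, and resolution are multiplied by α^φ, β^φ, and γ^φ respectively, where the compound coefficient φ controls how much extra compute you spend, and the constraint α · β² · γ² ≈ 2 means each unit increase in φ roughly doubles the FLOPS. Below is a minimal Python sketch of that rule, using the α = 1.2, β = 1.1, γ = 1.15 values the paper found by grid search on the B0 baseline (note that the officially released B1–B7 checkpoints round these multipliers per model rather than using the exact powers):

```python
# Compound scaling rule from the EfficientNet paper (Tan & Le, 2019).
# alpha, beta, gamma were found by a grid search on the B0 baseline.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # depth, width, resolution bases

def compound_scale(phi: float):
    """Return (depth, width, resolution) multipliers for compound coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# alpha * beta^2 * gamma^2 ~= 2, so each +1 in phi roughly doubles the FLOPS.
for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```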


Model Scaling. (a) is a baseline network example; (b)-(d) are conventional scaling that only increases one dimension of network width, depth, or resolution. (e) is our proposed compound scaling method that uniformly scales all three dimensions with a fixed ratio.

Didn't understand what's going on? Don't worry, you will once you see the architecture. But first, let's look at the results they achieved.


Model size vs. ImageNet accuracy

With considerably fewer parameters, this family of models is efficient and also provides better results. So now we have seen why these might become the standard pre-trained models, but something's missing. I remember an article by Raimi Karim that showed the architectures of popular pre-trained models, and it helped me a lot in understanding them and creating similar architectures.

As I could not find anything like that for EfficientNet on the net, I decided to work through it and create one for all of you.

Common Things in All the Models

The first part of any network is its stem, after which all the experimenting with the architecture starts. The stem and the final layers are common to all eight models.
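Before getting to what differs, here is a minimal Keras sketch of those shared parts, assuming the B0 values (32 filters in the stem, 1280 in the head, 224×224 input); the larger variants keep exactly this structure and only scale these numbers with the width multiplier:

```python
import tensorflow as tf
from tensorflow.keras import layers

def stem(inputs, filters=32):
    """Shared stem: 3x3 conv with stride 2, then batch norm and Swish."""
    x = layers.Conv2D(filters, 3, strides=2, padding="same", use_bias=False)(inputs)
    x = layers.BatchNormalization()(x)
    return layers.Activation("swish")(x)

def head(x, num_classes=1000, filters=1280, dropout_rate=0.2):
    """Shared final layers: 1x1 conv, batch norm, Swish, pooling, dropout, classifier."""
    x = layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("swish")(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(dropout_rate)(x)
    return layers.Dense(num_classes, activation="softmax")(x)

# Usage: the eight variants differ only in the blocks between these two parts.
inputs = tf.keras.Input(shape=(224, 224, 3))  # B0 resolution
x = stem(inputs)
# ... the MBConv stages would go here ...
outputs = head(x)
model = tf.keras.Model(inputs, outputs)
```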

