In short, the authors have raised global awareness of recent NLP trends and have urged researchers, developers, and practitioners working with language technology to take a holistic and responsible approach.
The most notable NLP trend: the ever-increasing size (measured by the number of parameters and the size of the training data) of…
Back in mid-2019, I was offered teaching assistantship positions at two different institutes. They were mostly distinct, barring a few areas. Both had a similar motive: to provide state-of-the-art knowledge to their paid learners. I knew it would be difficult for me to juggle both alongside my existing job. …
Have you watched this?
Watch these videos at your convenience and go through this blog to understand (without much mathematical sophistication) how we arrived at this stage.
For Data Scientists, notebooks have become the ‘de facto’ tool when working on a project. Whether they are performing EDA on the initially available dataset, beginning with data preprocessing steps, or experimenting with different models and libraries, the notebook is where they begin. …
Until 2014, CNN architectures had a standard design:
1. Stacked convolutional layers with ReLU activations
2. Optionally followed by contrast normalization and max-pooling, with dropout to address overfitting
3. Followed by one or more fully connected layers at the end (a minimal sketch of this design follows)
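To make that recipe concrete, here is a minimal PyTorch sketch of the standard pre-2014 design. The layer counts, channel sizes, and the 32x32 input are my own illustrative assumptions, not the dimensions of any particular paper.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the pre-2014 "standard" CNN recipe:
# stacked conv + ReLU, optional max-pooling, dropout, then FC layers.
class StandardCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # stacked conv + ReLU
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),                  # optional max-pooling
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.5),                            # dropout against overfitting
            nn.Linear(64 * 8 * 8, 256),                   # fully connected layers at the end
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# For a 32x32 RGB input, two max-pools leave an 8x8 feature map:
logits = StandardCNN()(torch.randn(1, 3, 32, 32))
```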
Which three concepts lie at the heart of deep learning? Loss functions, optimization algorithms, and backpropagation. Without them, we would still be conceptually stuck at the perceptron model of the 1950s.
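To see all three in one place, here is a minimal PyTorch training step; the model, data, and learning rate are placeholders, purely for illustration. The loss function scores the predictions, `loss.backward()` runs backpropagation to compute gradients, and the optimizer applies the weight update.

```python
import torch
import torch.nn as nn

# Placeholder model and data, purely for illustration.
model = nn.Linear(10, 2)
inputs = torch.randn(8, 10)
targets = torch.randint(0, 2, (8,))

loss_fn = nn.CrossEntropyLoss()                            # 1. loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # 2. optimization algorithm

optimizer.zero_grad()                    # clear gradients from the previous step
loss = loss_fn(model(inputs), targets)   # score the current predictions
loss.backward()                          # 3. backpropagation computes the gradients
optimizer.step()                         # the optimizer applies the update
```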
If you want to learn about AlexNet, check out this blog, where I have covered it extensively.
Here is an example notebook in which I have imported a pre-trained AlexNet model from the PyTorch library and used it to classify an image.
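In rough outline, the notebook does something like the following sketch (the image path "photo.jpg" is a placeholder): load the pre-trained AlexNet from torchvision, preprocess an image with the standard ImageNet transforms that the pre-trained weights expect, and take the arg-max of the output as the predicted class.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load AlexNet with ImageNet-pre-trained weights and switch to eval mode.
model = models.alexnet(pretrained=True)
model.eval()

# Standard ImageNet preprocessing for torchvision's pre-trained models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")   # placeholder image path
batch = preprocess(image).unsqueeze(0)           # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)
print(probs.argmax(dim=1))                       # index of the predicted ImageNet class
```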
Feel free to play around and discuss.
If you want to explore AlexNet a bit more, go through my blog on ZFNet.
Although ZFNet was an updated version of AlexNet, the paper contributed to our in-depth understanding of CNN architectures.
Originally published at https://dev.to on August 5, 2020.
Each year's ILSVRC winners conveyed some interesting insights, and 2014 was special in that regard. For most years, the challenge tasks were:
1. Image classification
2. Single-object localization
3. Object detection
ZFNet was introduced in the paper titled Visualizing and Understanding Convolutional Networks by Matthew D. Zeiler and Rob Fergus. The architecture did not win the competition, but its insights were implemented by that year's winner (Clarifai, founded by Zeiler, with an 11.19% test error). This paper is remarkable because…
AlexNet was introduced in the paper titled ImageNet Classification with Deep Convolutional Neural Networks by Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Since then, it has been cited around 67,000 times and is widely considered one of the most influential papers in the field of computer vision. …