Machine Learning Explained
  • Videos: 92
  • Views: 324,202
What to Learn for Deep Learning? | 3 Roadmaps for Beginners
If you are a total beginner in deep learning and you are motivated, check out this video to get started with an efficient learning framework!
# Table of Content
- Introduction: 0:00
- Usual Roadmap: 0:56
- Issue with Said Roadmap: 1:43
- Best Roadmap: 3:31
- Set a Goal: 4:40
- Step 1: 5:40
- Step 2: 6:58
- Step 3: 8:22
- Machine Learning Engineer Roadmap: 10:11
- Researcher Roadmap: 12:48
- AI Product Roadmap: 15:52
- Conclusion: 18:34
Cool notebooks from the creator of Fast.AI you should check out:
📌 Image Recognition of Birds: www.kaggle.com/code/jhoward/is-it-a-bird-creating-a-model-from-your-own-data
Do check out this playlist to get started with the learning framework:
📌 ruclips.net/p/PLKy51kKbOLlz2...
Views: 354

Videos

What are Pooling Layers in Deep Neural Networks?
4.9K views · 17 hours ago
You might be wondering, why we even need to do pooling in the first place for CNN. In this tutorial, we'll answer this question by exploring the three ways in which pooling is helpful for training deep vision models. # Table of Content - Introduction: 0:00 - 3 reasons to use pooling layers: 0:29 - What is pooling: 0:50 - pooling for dimensionality reduction: 3:48 - pooling for translational inv...
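To make the dimensionality-reduction point concrete, here is a minimal sketch (assuming PyTorch, which the channel's other tutorials use):

```python
import torch
import torch.nn as nn

# A 2x2 max pool with stride 2 keeps only the strongest activation in each
# window, halving both spatial dimensions while leaving channels untouched.
pool = nn.MaxPool2d(kernel_size=2, stride=2)

x = torch.randn(1, 16, 32, 32)  # (batch, channels, height, width)
y = pool(x)
print(y.shape)  # torch.Size([1, 16, 16, 16])
```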
Network in Network Deep Neural Network Explained with Pytorch
433 views · 1 day ago
Network in Network (NiN) is a very influential paper that introduced two massively useful concepts to the next generation of deep neural network architecture. Namely, 1x1 convolution throughout the network and global average pooling for the classification layer. # Table of Content - Introduction: 0:00 - 1x1 convolution: 0:49 - Global Average Pooling: 2:33 - Datasets: 3:22 - Results: 3:43 - Code...
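Both ideas can be sketched in a few lines of PyTorch (the dimensions here are illustrative, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

# NiN-style head: a 1x1 convolution maps feature channels to class scores at
# each spatial location, then global average pooling collapses each score map
# to a single logit -- no fully connected classifier needed.
num_classes = 10  # illustrative
head = nn.Sequential(
    nn.Conv2d(64, num_classes, kernel_size=1),  # 1x1 conv: 64 -> 10 channels
    nn.AdaptiveAvgPool2d(1),                    # global average pooling
    nn.Flatten(),                               # (batch, num_classes)
)

x = torch.randn(8, 64, 7, 7)
print(head(x).shape)  # torch.Size([8, 10])
```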
How to Set Up your Deep Learning Environment?
139 views · 1 day ago
Getting started with deep learning doesn't have to be complicated. The best way to set yourself up is to go on Colab or Kaggle and make your analysis from a notebook. # Table of Content - Use Online Notebooks: 0:00 - Main Benefit of Notebooks: 1:14 - Notebook Anatomy: 1:53 - Pros of Notebooks: 5:52 - Cons of Notebooks: 6:09 - Conclusion Use Notebooks: 7:29 Honestly, getting started is the one s...
Highway Networks - Deep Neural Network Explained
231 views · 14 days ago
Highway Networks are a type of network inspired by LSTM that make use of learnable information highway to let inputs flow unimpeded to subsequent layers. They have a lot of similarities with residual neural networks and offer deep insight into how to make training deeper neural networks possible. # Table of Content - Introduction: 0:00 - Degradation Problem: 0:38 - Idea Behind Highway Networks:...
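The gating idea can be sketched roughly as follows (a simplified layer, assuming PyTorch; not the paper's exact setup or initialization):

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    """y = H(x) * T(x) + x * (1 - T(x)): a learned sigmoid gate T decides how
    much transformed signal vs. raw input flows to the next layer."""
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)  # H(x)
        self.gate = nn.Linear(dim, dim)       # T(x)

    def forward(self, x):
        h = torch.relu(self.transform(x))
        t = torch.sigmoid(self.gate(x))
        # When the gate is closed (t near 0), the input flows unimpeded.
        return h * t + x * (1 - t)

layer = HighwayLayer(32)
x = torch.randn(4, 32)
print(layer(x).shape)  # torch.Size([4, 32])
```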
What are 1x1 Convolutions in Deep Learning?
2.4K views · 14 days ago
You might have come across 1x1 convolution in deep learning architecture and wondered why they were there. In this tutorial, I'll walk you through their usefulness and how they compare to other dimensionality reduction techniques. # Table of Content - Introduction: 0:00 - 1x1 in networks: 0:11 - Convolutions: 1:40 - How to reduce dimensionality: 2:48 - What is 1x1 convolution doing?: 4:25 - Poo...
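As a rough illustration of the channel-reduction role (the numbers here are made up for the example):

```python
import torch
import torch.nn as nn

# A 1x1 convolution is a per-pixel linear map across channels: here it
# compresses 256 feature maps into 64 without touching height or width.
reduce = nn.Conv2d(256, 64, kernel_size=1)

x = torch.randn(1, 256, 28, 28)
y = reduce(x)
print(y.shape)  # torch.Size([1, 64, 28, 28])

# The parameter count stays small: one 256-dim weight vector plus a bias
# per output channel.
params = sum(p.numel() for p in reduce.parameters())
print(params)  # 256 * 64 + 64 = 16448
```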
Inception Net [V1] Deep Neural Network - Explained with Pytorch
348 views · 21 days ago
GoogLeNet or Inception V1 is an interesting 22-layer deep neural network that made heavy use of 1x1 convolution in order to construct a competitive deep network in 2014. In this tutorial, we’ll take a look at its usage in Pytorch with CIFAR-10, a step-by-step explanation of the paper “Going Deeper with Convolutions” as well as a walkthrough of the Pytorch implementation. # Table of Content - Int...
FractalNet Deep Neural Network Explained
513 views · 28 days ago
FractalNet is an alternative to residual neural networks that was built to showcase that it wasn't the residual aspect of resnet that made it so powerful. Indeed, the FractalNet paper was able to show that it was the training of sub-paths of different lengths that made the network much more performant than VGG. In this tutorial, we'll go through the paper and a Pytorch implementation of Fractal...
What is VGG in Deep Learning?
2.7K views · 1 month ago
Depth in neural networks is a very important parameter. Before ResNet, the VGG network was able to prove its importance by scaling up AlexNet to 16-19 layers. In this tutorial, we'll take a look at the theory behind the architecture as well as a Pytorch implementation from the official documentation. # Table of Content - The Importance of Depth in Neural Networks: 0:00 - VGG Network Architectur...
Stochastic Depth for Neural Networks Explained
1.1K views · 1 month ago
Stochastic depth is a powerful regularization method applied to residual neural networks that speeds up training and enhances test performance. In this tutorial, we'll go through the methodology and a Pytorch implementation. # Table of Content - Introduction: 0:00 - Background and Context: 0:30 - Questions: 2:46 - Architecture Changes: 3:00 - Why use the resnet architecture for stochastic depth?...
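The core mechanic can be sketched for a single residual block (an illustrative sketch with an assumed survival probability, not the video's exact code):

```python
import torch

def stochastic_depth_block(x, residual_fn, survival_prob=0.8, training=True):
    """Stochastic depth sketch: during training the residual branch is dropped
    with probability 1 - survival_prob, reducing the block to the identity;
    at test time the branch is kept but rescaled by its survival probability."""
    if training and torch.rand(1).item() > survival_prob:
        return x  # block skipped entirely for this step
    out = residual_fn(x)
    if not training:
        out = out * survival_prob  # expected-value rescaling at test time
    return x + out

x = torch.ones(2, 8)
y = stochastic_depth_block(x, lambda t: t * 0.5, training=False)
print(y.shape)  # torch.Size([2, 8])
```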
DenseNet Deep Neural Network Architecture Explained
1.7K views · 1 month ago
DenseNets are a variation on ResNets that swap the identity addition for concatenation operations. This has many benefits, mainly better performance for smaller parameter sizes. In this video, we'll discuss this architecture and do a full Pytorch implementation walkthrough. Table of Content - Introduction: 0:00 - Background and Context: 0:26 - Architecture: 3:25 - Data Set: 7:29 - Main Results:...
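The swap can be illustrated in one line each (shapes are arbitrary here):

```python
import torch

skip = torch.randn(1, 64, 8, 8)  # features arriving over the skip connection
out = torch.randn(1, 64, 8, 8)   # features produced by the current block

resnet_style = skip + out                       # addition: shape unchanged
densenet_style = torch.cat([skip, out], dim=1)  # concatenation: channels accumulate

print(resnet_style.shape)    # torch.Size([1, 64, 8, 8])
print(densenet_style.shape)  # torch.Size([1, 128, 8, 8])
```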
How to Run ML models on the Browser?
159 views · 2 months ago
Running ML models in the browser is now pretty easy with Transformers.js. In this tutorial, I walk through a very simple JS app that does object recognition client-side! Code: github.com/yacineMahdid/transformer-js-test-app # Table of Content - Introduction: 0:00 - Example App: 0:32 - index.html: 1:44 - style.css: 2:42 - index.js: 3:20 - file upload event listener: 5:07 - detect function: 6:02 - ...
ResNet Deep Neural Network Architecture Explained
2.1K views · 2 months ago
ResNets are a very useful deep neural network architecture that excel at computer vision. Here we will discuss the core insight from the architecture and look at a Pytorch implementation! Table of Content - Introduction: 0:00 - Background and Context: 1:18 - Data Set: 5:28 - Architecture: 6:12 - Main Results: 10:41 - Limitation: 12:58 - Pytorch Walkthrough: 13:58 - High-Level Pytorch API: 15:12...
Img2Vec and UMAP to Visualize High Dimensional Data
415 views · 2 months ago
A very cool visualization trick that I’ve used recently for vision classification: image → embedding → UMAP projection → dynamic visualization 💡 Notebook for this project: 📌 www.kaggle.com/code/yacine0101/aerospace-engine-defect-detection # Table of Content - Introduction: 0:00 - Project Overview: 0:30 - Data Overview: 2:45 - Getting Embeddings with Img2Vec: 4:12 - Reducing Dimensionality with UM...
Cosine Similarity for Data Science Tutorial
419 views · 3 months ago
Cosine similarity is a measure of how two multi-dimensional data points are alike. We'll explore the metric along with a Python example in this tutorial. # Table of Content - Introduction: 0:00 - Context: 0:24 - Example of Cosine Similarity Usage - Graph: 1:27 - Example of Cosine Similarity Usage - NLP: 2:44 - Theory: 3:45 - Formula: 5:15 - Kaggle Notebook Overview: 6:34 - Python Implementation...
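A minimal pure-Python version of the formula (the video's notebook may differ):

```python
import math

def cosine_similarity(a, b):
    """cos(theta) = dot(a, b) / (|a| * |b|): 1 means same direction,
    0 means orthogonal, -1 means opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1, 0], [0, 1]))        # 0.0 (orthogonal)
print(cosine_similarity([1, 2, 3], [2, 4, 6]))  # ~1.0 (same direction)
```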
LLMs Learn Tools at 775M Parameters!
724 views · 5 months ago
LLMs Learn Tools at 775M Parameters!
Solving Boggle using AI: Dynamic Programming + Trie in Python
642 views · 11 months ago
Solving Boggle using AI: Dynamic Programming + Trie in Python
What is Data Science? - Introduction to Data Science with Python Workshop
372 views · 1 year ago
What is Data Science? - Introduction to Data Science with Python Workshop
How to Solve Programming Problem | 3 Steps Framework
369 views · 1 year ago
How to Solve Programming Problem | 3 Steps Framework
How to Start a Data Science Project?
608 views · 2 years ago
How to Start a Data Science Project?
How to Ask for Help in your Data Science Project!
247 views · 2 years ago
How to Ask for Help in your Data Science Project!
Shannon Entropy from Theory to Python
5K views · 2 years ago
Shannon Entropy from Theory to Python
Handling Negative Result in Data Science Project!
340 views · 2 years ago
Handling Negative Result in Data Science Project!
Fact Check and Document All Data Science Assumptions ✅
133 views · 2 years ago
Fact Check and Document All Data Science Assumptions ✅
Kolmogorov Complexity With DNA Data in Python
899 views · 2 years ago
Kolmogorov Complexity With DNA Data in Python
Why your Data Science Project Exist?
566 views · 2 years ago
Why your Data Science Project Exist?
Distance Metrics Explained and Visualized in Python
2.4K views · 3 years ago
Distance Metrics Explained and Visualized in Python
Why you shouldn't use K-Fold Cross Validation.
446 views · 3 years ago
Why you shouldn't use K-Fold Cross Validation.
K-Nearest Neighbor from Scratch in Python
2.3K views · 3 years ago
K-Nearest Neighbor from Scratch in Python
How to Structure Data Science Project
6K views · 3 years ago
How to Structure Data Science Project

Comments

  • @Param3021
    @Param3021 6 days ago

    Too underrated video! Thank you so much for this video. Now my vision is clear and I have set my goal: an ML Engineer job. Will comment back here soon when I get a job/internship!

    • @machinelearningexplained
      @machinelearningexplained 6 days ago

      Cool stuff, do you have a target company in mind? I could make you a more detailed roadmap.

  • @thouys9069
    @thouys9069 6 days ago

    I still don't get why it's supposed to help translational invariance. As you say, the convolution is already capable of that. If I move the image contents 50 pixels to the right, the features should also move 50 pixels, given stride 1 and padding. Exactly the same is true for traditional Sobel edge detectors: the edges don't change whether or not I translate the image before convolving it with the edge detection filters.

    • @machinelearningexplained
      @machinelearningexplained 6 days ago

      No, convolutions help with translational equivariance, not with translational invariance. The idea of adding pooling is that you are forcing exact nearby pixel/feature information to be "lost". This means the network is forced to learn more generalizable components of the picture you are showing it.
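The distinction in this reply can be checked numerically; a quick sketch (assuming PyTorch) using a single bright pixel as the "object":

```python
import torch
import torch.nn.functional as F

# One bright pixel, and the same image shifted 2 pixels to the right.
x = torch.zeros(1, 1, 8, 8)
x[0, 0, 4, 2] = 1.0
shifted = torch.roll(x, shifts=2, dims=3)

kernel = torch.ones(1, 1, 3, 3)
feat = F.conv2d(x, kernel, padding=1)
feat_shifted = F.conv2d(shifted, kernel, padding=1)

# Equivariance: shifting the input shifts the feature map by the same amount.
print(torch.equal(torch.roll(feat, shifts=2, dims=3), feat_shifted))  # True

# Invariance only appears after pooling: the pooled response ignores position.
print(feat.amax().item() == feat_shifted.amax().item())  # True
```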

  • @simonvutov7575
    @simonvutov7575 7 days ago

    Hey, great video!! Do you go to university in Montreal? I'm from Ottawa and will be going to Waterloo for computer engineering.

    • @machinelearningexplained
      @machinelearningexplained 7 days ago

      Hey there, I went to McGill, yes! Cool stuff, best of luck in your computer engineering journey :)

  • @luisluiscunha
    @luisluiscunha 10 days ago

    You made this concept so easy to understand: thank you!

  • @rafa_br34
    @rafa_br34 11 days ago

    Very useful indeed. I just think the 3D representation could be a bit better (I'm used to seeing the filter rectangle behind the first layer and not on the side, but that's probably just me).

    • @machinelearningexplained
      @machinelearningexplained 11 days ago

      True, it's tilted by about 90 degrees. Otherwise the 1x1 convolution wouldn't fit well in the image, I believe.

  • @himadrilabana8233
    @himadrilabana8233 11 days ago

    Does this code only work for images of the same size? Because it was giving errors when I used it on my images.

    • @machinelearningexplained
      @machinelearningexplained 11 days ago

      Hey there, it's been a while since I've checked this code. What was your image size?

  • @pachecotaboadaandrejoaquin6727
    @pachecotaboadaandrejoaquin6727 16 days ago

    Thank you for breaking it down so well! Keep up the excellent work!

    • @machinelearningexplained
      @machinelearningexplained 16 days ago

      Hey, thanks for the kind words! Will do, I have a few more videos ready for next week :) Do let me know if there is a specific topic or question you would like covered!

  • @HeyySujal
    @HeyySujal 18 days ago

    I like your explanations, but I'm watching them randomly. As a beginner, where should I start?

    • @machinelearningexplained
      @machinelearningexplained 18 days ago

      Glad you liked it. It's the second time this week I've had this request for where to start in deep learning. I'm setting up a video on that topic and will publish it soon!

  • @JuliusSmith
    @JuliusSmith 19 days ago

    Shouldn't we call it a 1 x 1 x Cin convolution?

    • @machinelearningexplained
      @machinelearningexplained 19 days ago

      That would indeed be a less confusing name for sure. That thing already has like 7 different names though haha

  • @VigneshBhaskar
    @VigneshBhaskar 21 days ago

    Thanks a lot for the content. One small request: can you please reduce the background music volume next time?

    • @machinelearningexplained
      @machinelearningexplained 20 days ago

      Hey there, yes, sorry for the inconvenience. I've improved the sound in my new videos :)!

  • @dgs1977
    @dgs1977 23 days ago

    Very interesting! Thank you so much! :)

  • @qasimjan5258
    @qasimjan5258 26 days ago

    I am totally new to machine learning. Where do I start?

  • @AbhishekSaini03
    @AbhishekSaini03 1 month ago

    Thanks! How can we use VGG for a 1D signal? Is it possible to use VGG for regression instead of classification, and how?

    • @machinelearningexplained
      @machinelearningexplained 1 month ago

      Hmmm, depends. What's the 1D signal about? Is it visual?

    • @AbhishekSaini03
      @AbhishekSaini03 1 month ago

      It's an acoustic signal.

    • @machinelearningexplained
      @machinelearningexplained 1 month ago

      Ah, then no, VGG shouldn't be your pick here. It was expressly designed for image classification. Take a look at the various models on PyTorch made specifically for audio signals: 📌 pytorch.org/audio/stable/models.html

    • @AbhishekSaini03
      @AbhishekSaini03 1 month ago

      Can't we change the output layer and activation function to do regression?

    • @machinelearningexplained
      @machinelearningexplained 1 month ago

      Yes you can, but the internals of the model are tailor-built for images. If you are able to express your 1D signal input as an image, I would say it might be worth trying. However, there are other models made specifically for audio.

  • @kukfitta2258
    @kukfitta2258 1 month ago

    Very cool, thank you for the knowledge!

  • @victorisrael6191
    @victorisrael6191 1 month ago

    Glorious😳

  • @machinelearningexplained
    @machinelearningexplained 1 month ago

    Hey, FYI: I had to reshoot some sections of this video because I couldn't stop saying Drop Path (from FractalNet) instead of Stochastic Depth. There is still 1 wrong mention of drop path in there that I wasn't able to fix haha. That's what you get from reading two papers simultaneously! 😅

  • @HasanRoknabady
    @HasanRoknabady 1 month ago

    Thank you very much for your nice work. Can I have your slides?

    • @machinelearningexplained
      @machinelearningexplained 1 month ago

      Thanks, for sure! You can shoot me an email at mail@yacinemahdid.com and I'll send them to you.

  • @sandeepbhatti3124
    @sandeepbhatti3124 1 month ago

    Thank you! Exactly what I needed.

  • @JamesColeman1
    @JamesColeman1 1 month ago

    Nice work, but why was the first file larger to start with? Character quantity?

    • @mprone
      @mprone 1 month ago

      Yes, but this shouldn't have happened. The first file contains 2M characters, while the second only 1M, thus |file_1| = 2 * |file_2|. The author of the video wanted the first file to hold 500,000 "ab" pairs (thus 1M chars), but that's not what he did.

  • @camelendezl
    @camelendezl 3 months ago

    Amazing video! Thanks!

  • @machinelearningexplained
    @machinelearningexplained 3 months ago

    Who the heck is Tanimoto

  • @DistortedV12
    @DistortedV12 4 months ago

    This is SUPER helpful. I was looking online for a good example using sklearn to no avail. Even asked ChatGPT and was led astray.

  • @JeffSzuc
    @JeffSzuc 4 months ago

    Thank you! This was exactly the explanation I needed.

  • @Miami_adana09346
    @Miami_adana09346 5 months ago

    Will you please tell me how to do Nadam optimization in MATLAB for deep learning?

    • @machinelearningexplained
      @machinelearningexplained 5 months ago

      Hey there! For sure, do you already have some code that I can take a look at? Also, why are you using MATLAB?

  • @Yashchaudhary-be2bu
    @Yashchaudhary-be2bu 6 months ago

    Thank you so much, it helped me a lot in my project!

  • @akhtarbiqadri1
    @akhtarbiqadri1 7 months ago

    Can you give me the link to the exact Jupyter notebook? I can't find the exact same notebook from the link that you provide in the description.

  • @ufukthegreat0
    @ufukthegreat0 8 months ago

    Appreciate it, man, thank you. This is golden.

  • @ghinwamasri5537
    @ghinwamasri5537 8 months ago

    Thank you for your concise and clear explanation!

  • @Diekartoffel1
    @Diekartoffel1 9 months ago

    If I'm not mistaken, you only return the entropy for the last checked nucleotide:

        entropy = 0
        for nucleotide in {'A', 'T', 'G', 'C', 'N'}:
            rel_freq = dna_sequence.count(nucleotide) / len(dna_sequence)
            if rel_freq > 0:
                entropy = entropy - (rel_freq * math.log(rel_freq, 2))
        return entropy

  • @MalichiMyers
    @MalichiMyers 10 months ago

    hi

    • @machinelearningexplained
      @machinelearningexplained 10 months ago

      Hello 👋 Let me know if you have questions!

    • @MalichiMyers
      @MalichiMyers 10 months ago

      @machinelearningexplained Do you know a way to use the MNIST dataset without using sklearn, pytorch, or tensorflow? If not, what are some datasets that you recommend?

  • @yacine997
    @yacine997 11 months ago

    This is a great project for beginners, thanks for this valuable information!

    • @machinelearningexplained
      @machinelearningexplained 11 months ago

      Glad it was interesting! I remember the first time I attempted to make a Boggle solver: my DFS-based algorithm was extremely inefficient. I had to build it in C and have parallelization in place to barely hit the 2-minute mark for the game. Once you use dynamic programming and a search-optimized trie structure, it's a world of difference.
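The search-optimized trie mentioned above can be sketched with plain dicts (illustrative only, not the video's code):

```python
def build_trie(words):
    """Nested-dict trie: walking it letter by letter lets a board search
    abandon any path that is not a prefix of a real word."""
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = word  # sentinel marking a complete word
    return root

trie = build_trie(["cat", "car", "dog"])
print("a" in trie["c"])            # True: "ca" is a live prefix
print("$" in trie["c"]["a"]["t"])  # True: "cat" is a complete word
print("x" in trie)                 # False: prune this branch immediately
```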

  • @haoduong6565
    @haoduong6565 1 year ago

    Hi, can you share an example of Fine and Gray modeling? Also, where can I get these codes? Thanks!

    • @machinelearningexplained
      @machinelearningexplained 11 months ago

      Hey there, sorry for the wait, this comment slipped through! What do you mean by Fine and Gray modeling?

  • @mitchellshields9904
    @mitchellshields9904 1 year ago

    promo sm

  • @yacine997
    @yacine997 1 year ago

    Love this video!

  • @tuberclebacilli9417
    @tuberclebacilli9417 1 year ago

    Where are you from?

    • @machinelearningexplained
      @machinelearningexplained 1 year ago

      I'm originally from Algeria, but I've lived in Canada for most of my life! BTW, you can join the Discord for general casual chat: discord.gg/QpkxRbQBpf Easier for me to follow up!

  • @tuberclebacilli9417
    @tuberclebacilli9417 1 year ago

    Keep going 💪💖

  • @tuberclebacilli9417
    @tuberclebacilli9417 1 year ago

    Welcome back

  • @ishanmistry8479
    @ishanmistry8479 1 year ago

    You're back again! That's amazing ✨

  • @sohailraza2005
    @sohailraza2005 1 year ago

    Can you please help me with a rainfall problem using deepxde and a physics-informed neural network?

    • @machinelearningexplained
      @machinelearningexplained 1 year ago

      Yes sir, for sure, shoot me a message on LinkedIn. I have some time tomorrow for a video call 📌 www.linkedin.com/in/yacinemahdid/

    • @sohailraza2005
      @sohailraza2005 1 year ago

      @machinelearningexplained Please accept my request, and thank you sooooooooo much for replying to me!

  • @AJ-et3vf
    @AJ-et3vf 1 year ago

    So for Nesterov gradient, would the learning rate have to be set constant and not found automatically using line search for each iteration?

  • @AJ-et3vf
    @AJ-et3vf 1 year ago

    Great video. Thank you

  • @AJ-et3vf
    @AJ-et3vf 1 year ago

    Great video. Thank you

  • @kevon217
    @kevon217 1 year ago

    Very helpful. Thanks!

  • @Anastasiyofworld
    @Anastasiyofworld 1 year ago

    Nice tutorial, thanks a lot. But! If you explain something, just explain. Watching you move the pieces of code around was so annoying!

    • @machinelearningexplained
      @machinelearningexplained 1 year ago

      Hi there, thanks for the feedback! Will improve that part in the next tutorial for sure :)

  • @Detective_Jones
    @Detective_Jones 1 year ago

    I'm so frustrated that I don't know how. *Hope I can learn to code math formulas in the future.* If anyone has some guidance, please share it.

    • @machinelearningexplained
      @machinelearningexplained 1 year ago

      Hey there Jones, what do you mean? Is there a math formula in particular you are struggling with?

  • @Lorenzo8690
    @Lorenzo8690 1 year ago

    Thank you very much for the tutorials and code! But I don't quite understand why both AdaGrad and AdaDelta perform poorly on these examples?

    • @machinelearningexplained
      @machinelearningexplained 1 year ago

      Hey there, glad it was useful! The examples use a very basic formula for which AdaGrad/AdaDelta are way overkill. It was more to illustrate that we can code these formulas and that they work in practice. To make them work well I would have had to tweak the hyperparameters some more. In a neural network, though, they are good optimizers!

    • @Lorenzo8690
      @Lorenzo8690 1 year ago

      @machinelearningexplained Thank you very much! I used your work to create a Colab file in which I animated the different algorithms shown :)

    • @machinelearningexplained
      @machinelearningexplained 1 year ago

      @Lorenzo8690 Wow, cool, shoot me a link and I'll check it out!
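The cautious behavior discussed in this thread, AdaGrad's per-parameter step shrinking as squared gradients accumulate, can be seen in a few lines (a toy sketch on f(x) = x^2, not the tutorial's exact code):

```python
import math

# AdaGrad on f(x) = x^2: the effective step lr / sqrt(accumulated grad^2)
# shrinks every iteration, so progress slows even on an easy objective.
x, lr, accum, eps = 5.0, 1.0, 0.0, 1e-8
for _ in range(100):
    grad = 2 * x                        # f'(x)
    accum += grad ** 2                  # running sum of squared gradients
    x -= lr * grad / (math.sqrt(accum) + eps)

print(0.0 < x < 5.0)  # True: moving toward the minimum at 0, but slowly
```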