
2024 Deployment of Machine Learning Models in Production


Laxmi Kant KGP Talkie

9:36:32

  • 1 - Welcome.mp4
    05:11
  • 2 - Introduction.mp4
    04:21
  • 3 - DO NOT SKIP IT Download Working Files.html
  • 3 - Sentiment-Classification-using-BERT.zip
  • 4 - What is BERT.mp4
    07:00
  • 5 - What is ktrain.mp4
    05:00
  • 6 - Going Deep Inside ktrain Package.mp4
    04:35
  • 7 - Notebook Setup.mp4
    02:19
  • 8 - Must Read This.html
  • 9 - Installing ktrain.mp4
    04:22
  • 10 - Loading Dataset.mp4
    04:44
  • 11 - TrainTest Split and Preprocess with BERT.mp4
    08:17
  • 12 - BERT Model Training.mp4
    10:23
  • 13 - Testing Fine Tuned BERT Model.mp4
    04:42
  • 14 - Saving and Loading Fine Tuned Model.mp4
    06:18
  • 15 - Fine-Tuning-BERT-for-Disaster-Tweets-Classification.zip
  • 15 - Resources Folder.html
  • 16 - BERT Intro Disaster Tweets Dataset Understanding.mp4
    11:25
  • 17 - Download Dataset.mp4
    04:33
  • 18 - Target Class Distribution.mp4
    07:33
  • 19 - Number of Characters Distribution in Tweets.mp4
    11:54
  • 20 - Number of Words Average Words Length and Stop words Distribution in Tweets.mp4
    07:05
  • 21 - Most and Least Common Words.mp4
    07:09
  • 22 - OneShot Data Cleaning.mp4
    05:07
  • 23 - Disaster Words Visualization with Word Cloud.mp4
    05:28
  • 24 - Classification with TFIDF and SVM.mp4
    09:05
  • 25 - Classification with Word2Vec and SVM.mp4
    09:54
  • 26 - Word Embeddings and Classification with Deep Learning Part 1.mp4
    09:08
  • 27 - Word Embeddings and Classification with Deep Learning Part 2.mp4
    11:43
  • 28 - BERT Model Building and Training.mp4
    08:35
  • 29 - BERT Model Evaluation.mp4
    10:04
  • 30 - DistilBERT-App.zip
  • 30 - Sentiment-Classification-using-DistilBERT.zip
  • 30 - What is DistilBERT.mp4
    09:11
  • 31 - Notebook Setup.mp4
    04:52
  • 32 - Data Preparation.mp4
    08:14
  • 33 - DistilBERT Model Training.mp4
    07:49
  • 34 - Save Model at Google Drive.mp4
    04:30
  • 35 - Model Evaluation.mp4
    03:21
  • 36 - Download Fine Tuned DistilBERT Model.mp4
    01:21
  • 37 - Flask App Preparation.mp4
    01:40
  • 38 - Run Your First Flask Application.mp4
    07:25
  • 39 - Predict Sentiment at Your Local Machine.mp4
    05:11
  • 40 - Build Predict API.mp4
    09:36
  • 41 - Deploy DistilBERT Model at Your Local Machine.mp4
    13:29
  • 42 - Create AWS Account.mp4
    06:50
  • 43 - Create Free Windows EC2 Instance.mp4
    05:45
  • 44 - Connect EC2 Instance from Windows 10.mp4
    07:23
  • 45 - Install Python on EC2 Windows 10.mp4
    03:02
  • 46 - Must Read This.html
  • 47 - Install TensorFlow 2 and KTRAIN.mp4
    10:36
  • 48 - Run Your First Flask Application on AWS EC2.mp4
    07:44
  • 49 - Transfer DistilBERT Model to EC2 Flask Server.mp4
    03:57
  • 50 - Deploy ML Model on EC2 Server.mp4
    11:44
  • 51 - Make Your ML Model Accessible to the World.mp4
    11:38
  • 52 - Install Git Bash and Commander Terminal on Local Computer.mp4
    07:13
  • 53 - Create AWS Account.mp4
    06:50
  • 54 - Launch Ubuntu Machine on EC2.mp4
    04:31
  • 55 - Connect AWS Ubuntu Linux from Windows Computer.mp4
    05:50
  • 56 - Install PIP3 on AWS Ubuntu.mp4
    05:04
  • 57 - Update and Upgrade Your Ubuntu Packages.mp4
    02:28
  • 58 - Must Read This.html
  • 59 - Install TensorFlow 2 KTRAIN and Upload DistilBert Model.mp4
    11:09
  • 60 - Create Extra RAM from SSD by Memory Swapping.mp4
    10:19
  • 61 - Deploy DistilBERT ML Model on EC2 Ubuntu Machine.mp4
    08:32
  • 62 - NGINX Introduction.mp4
    04:45
  • 62 - NGINX-uWSGI-and-Flask-Installation-Guide-Jupyter-Notebook.zip
  • 63 - Virtual Environment Setup.mp4
    06:10
  • 64 - Setting Up Flask Server.mp4
    05:54
  • 65 - NGINX Running Flask Application.mp4
    08:12
  • 66 - NGINX Running uWSGI Application.mp4
    05:58
  • 67 - Configuring uWSGI Server.mp4
    04:19
  • 68 - Start API Services at System Startup.mp4
    06:49
  • 69 - Configuring NGINX with uWSGI and Flask Server.mp4
    10:05
  • 70 - Congrats You Have Deployed ML Model in Production.mp4
    15:28
  • 71 - FastText-App.zip
  • 71 - FastText-Multi-Label-Text-Classification.zip
  • 71 - NGINX-uWSGI-and-Flask-Installation-Guide-Jupyter-Notebook.zip
  • 71 - What is MultiLabel Classification.mp4
    07:46
  • 72 - FastText Research Paper Review.mp4
    14:22
  • 73 - Notebook Setup.mp4
    06:39
  • 74 - Data Preparation.mp4
    12:02
  • 75 - FastText Model Training.mp4
    06:14
  • 76 - FastText Model Evaluation and Saving at Google Drive.mp4
    04:26
  • 77 - Creating Fresh Ubuntu Machine.mp4
    08:14
  • 78 - Setting Python3 and PIP3 Alias.mp4
    05:54
  • 79 - Creating 4GB Extra RAM by Memory Swapping.mp4
    03:33
  • 80 - Making Your Server Ready.mp4
    06:13
  • 81 - Preparing Prediction APIs.mp4
    12:33
  • 82 - Testing Prediction API at Local Machine.mp4
    06:08
  • 83 - Testing Prediction API at AWS Ubuntu Machine.mp4
    08:13
  • 84 - Configuring uWSGI Server.mp4
    06:15
  • 85 - Deploy FastText Model in Production with NGINX uWSGI and Flask.mp4
    07:11
    Description


    Deploy ML Model with BERT, DistilBERT, FastText NLP Models in Production with Flask, uWSGI, and NGINX at AWS EC2

    What You'll Learn?


    • You will learn how to deploy machine learning models on AWS EC2 using NGINX as a web server, Flask as a web framework, and uWSGI as the bridge between the two.
    • You will learn how to use FastText for natural language processing tasks in production and integrate it with TensorFlow for more advanced machine learning models.
    • You will learn how to use ktrain, a library built on top of TensorFlow, to easily train and deploy models in a production environment (see the short sketch after this list).
    • You will gain hands-on experience in setting up and configuring an end-to-end machine learning production pipeline using the aforementioned technologies.
    • You will learn how to optimize and fine-tune machine learning models for production use, and how to handle scaling and performance issues.
    • Complete End to End NLP Application
    • How to work with BERT in Google Colab
    • How to use BERT for Text Classification
    • Deploy Production Ready ML Model
    • Fine Tune and Deploy ML Model with Flask
    • Deploy ML Model in Production at AWS
    • Deploy ML Model at Ubuntu and Windows Server
    • DistilBERT vs BERT
    • You will learn how to develop and deploy a FastText model on AWS
    • Learn Multi-Label and Multi-Class classification in NLP
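
    For reference, here is a minimal sketch of fine-tuning a BERT classifier with ktrain, roughly the pattern the course follows; the toy data, folder name, and hyperparameters below are illustrative assumptions, not the course's exact notebook:

        import ktrain
        from ktrain import text

        # Toy data just to keep the sketch self-contained; the course uses real
        # sentiment and disaster-tweet datasets instead.
        train_texts = ['I loved this movie', 'This was a terrible film']
        train_labels = ['positive', 'negative']
        test_texts = ['What a great story', 'I did not enjoy it']
        test_labels = ['positive', 'negative']

        # Preprocess the text the way BERT expects (tokenization, special tokens, padding).
        (x_train, y_train), (x_test, y_test), preproc = text.texts_from_array(
            x_train=train_texts, y_train=train_labels,
            x_test=test_texts, y_test=test_labels,
            class_names=['negative', 'positive'],
            preprocess_mode='bert', maxlen=64)

        # Build a BERT classifier, wrap it in a ktrain Learner, and fine-tune it
        # with the one-cycle learning-rate policy.
        model = text.text_classifier('bert', train_data=(x_train, y_train), preproc=preproc)
        learner = ktrain.get_learner(model, train_data=(x_train, y_train),
                                     val_data=(x_test, y_test), batch_size=6)
        learner.fit_onecycle(2e-5, 1)

        # Save a Predictor that bundles the model with its preprocessing so it can
        # be reloaded later inside a Flask app ('bert_predictor' is an assumed name).
        predictor = ktrain.get_predictor(learner.model, preproc)
        predictor.save('bert_predictor')
        print(predictor.predict('A wonderful experience'))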

    Who is this for?


  • Machine learning engineers who want to gain hands-on experience in setting up and configuring an end-to-end machine learning production pipeline.
  • Data science enthusiasts who want to build an end-to-end NLP application
  • Data scientists who want to learn how to deploy their machine learning models in a production environment.
  • Developers who are interested in using technologies such as AWS, NGINX, FLASK, uwsgi, fasttext, TensorFlow, and ktrain to deploy machine learning models in production.
  • Individuals who want to learn how to optimize and fine-tune machine learning models for production use.
  • Professionals who want to learn how to handle scaling and performance issues when deploying machine learning models in production.
  • Anyone who wants to make a career in machine learning and learn about production deployment.
  • Anyone who wants to learn about the end-to-end pipeline of machine learning models, from training to deployment.
  • Anyone who wants to learn about the best practices and techniques for deploying machine learning models in a production environment.
    What You Need to Know?


  • Introductory knowledge of NLP
  • Comfortable with Python, Keras, and TensorFlow 2
  • Basic elementary mathematics


    Description

    Welcome to "Deploy ML Model with BERT, DistilBERT, FastText NLP Models in Production with Flask, uWSGI, and NGINX at AWS EC2"! In this course, you will learn how to deploy natural language processing (NLP) models using state-of-the-art techniques such as BERT and DistilBERT, as well as FastText, in a production environment.

    You will learn how to use Flask, uWSGI, and NGINX to create a web application that serves your machine-learning models. You will also learn how to deploy your application on the AWS EC2 platform, allowing you to easily scale your application as needed.
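
    For illustration, a Flask prediction API of the kind built in this course can be as small as the sketch below; the folder name 'distilbert_predictor', the /predict route, and the JSON shape are assumptions rather than the course's exact code:

        # app.py -- minimal Flask prediction API (a sketch, not the course's exact code)
        import ktrain
        from flask import Flask, request, jsonify

        app = Flask(__name__)

        # Load the fine-tuned model once at startup; 'distilbert_predictor' is an
        # assumed folder created earlier with predictor.save(...).
        predictor = ktrain.load_predictor('distilbert_predictor')

        @app.route('/predict', methods=['POST'])
        def predict():
            # Expects JSON such as {"text": "I really enjoyed this course"}
            data = request.get_json(force=True)
            label = predictor.predict(data['text'])
            return jsonify({'text': data['text'], 'sentiment': label})

        if __name__ == '__main__':
            # Local testing only; in production uWSGI imports `app` from this module
            # and NGINX proxies incoming HTTP requests to the uWSGI socket.
            app.run(host='0.0.0.0', port=5000)

    In production, the same `app` object is served by uWSGI behind NGINX instead of Flask's built-in development server.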

    Throughout the course, you will gain hands-on experience in setting up and configuring an end-to-end machine-learning production pipeline. You will learn how to optimize and fine-tune your NLP models for production use, and how to handle scaling and performance issues.

    By the end of this course, you will have the skills and knowledge needed to deploy your own NLP models in a production environment using the latest techniques and technologies. Whether you're a data scientist, machine learning engineer, or developer, this course will provide you with the tools and skills you need to take your machine learning projects to the next level.

    So, don't wait any longer and enroll today to learn how to deploy ML Model with BERT, DistilBERT, and FastText NLP Models in Production with Flask, uWSGI, and NGINX at AWS EC2!


    This course is suitable for the following individuals:

    1. Data scientists who want to learn how to deploy their machine learning models in a production environment.

    2. Machine learning engineers who want to gain hands-on experience in setting up and configuring an end-to-end machine learning production pipeline.

    3. Developers who are interested in using technologies such as NGINX, FLASK, uwsgi, fasttext, TensorFlow, and ktrain to deploy machine learning models in production.

    4. Individuals who want to learn how to optimize and fine-tune machine learning models for production use.

    5. Professionals who want to learn how to handle scaling and performance issues when deploying machine learning models in production.

    6. Anyone who wants to make a career in machine learning and wants to learn about production deployment.

    7. Anyone who wants to learn about the end-to-end pipeline of machine learning models from training to deployment.

    8. Anyone who wants to learn about the best practices and techniques for deploying machine learning models in a production environment.

    What you will learn in this course

    1. You will learn how to deploy machine learning models using NGINX as a web server, Flask as a web framework, and uWSGI as the bridge between the two.

    2. You will learn how to use FastText for natural language processing tasks in production and integrate it with TensorFlow for more advanced machine learning models (a brief FastText sketch follows this list).

    3. You will learn how to use ktrain, a library built on top of TensorFlow, to easily train and deploy models in a production environment.

    4. You will gain hands-on experience in setting up and configuring an end-to-end machine learning production pipeline using the aforementioned technologies.

    5. You will learn how to optimize and fine-tune machine learning models for production use, and how to handle scaling and performance issues.
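
    As a rough illustration of the FastText piece, the sketch below trains a tiny multi-label classifier; the file names, labels, and hyperparameters are assumptions for illustration only:

        import fasttext

        # FastText's supervised format: each line starts with one or more
        # '__label__<tag>' tokens followed by the text ('tags_train.txt' is an assumed name).
        with open('tags_train.txt', 'w') as f:
            f.write('__label__python __label__flask return json from a flask route\n')
            f.write('__label__aws __label__nginx configure nginx as a reverse proxy on ec2\n')

        # loss='ova' (one-vs-all) scores each label independently, which is what
        # multi-label classification requires.
        model = fasttext.train_supervised(input='tags_train.txt', lr=0.5, epoch=25,
                                          wordNgrams=2, loss='ova')

        # k=-1 together with a threshold returns every label scoring above 0.5.
        labels, probs = model.predict('deploy a flask api behind nginx on ec2',
                                      k=-1, threshold=0.5)
        print(labels, probs)

        model.save_model('tags_model.bin')  # reload later with fasttext.load_model(...)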

    Everything in this course is done on Google Colab, which means it doesn't matter what processor or computer you have. Colab is easy to use, and a plus point is that you get a free GPU to use in your notebook.
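
    For example, after switching the Colab runtime to GPU (Runtime > Change runtime type), you can confirm that TensorFlow sees it:

        import tensorflow as tf

        # An empty list means no GPU is attached and training will fall back to the CPU.
        print(tf.config.list_physical_devices('GPU'))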


    Instructor: Laxmi Kant KGP Talkie
    I am AVP, Data Science at Join Ventures, and have been a Ph.D. scholar at the Indian Institute of Technology (IIT), Kharagpur. I also co-founded a company, mBreath Technologies. I have 8+ years of experience in data science, team management, business development, and customer profiling. I have worked with startups and MNCs. You can join me on my YouTube channel, KGP Talkie.
    • Language: English
    • Training sessions: 80
    • Duration: 9:36:32
    • English subtitles: Yes
    • Release date: 2024/04/23