
Build an AWS Machine Learning Pipeline for Object Detection


Patrik Szepesi

16:17:24

132 Views
  • 1. Let's Look at our End Project.mp4
    12:28
  • 1. Source Code for the Course.html
  • 2. Setting up IAM User.mp4
    08:25
  • 3. Clarification about AWS S3.mp4
    01:13
  • 4. Getting Data for our Project.mp4
    04:13
  • 5. Getting dataset Part 1.mp4
    07:59
  • 6. Getting dataset Part 2.mp4
    04:33
  • 7. Getting dataset Part 3.mp4
    02:08
  • 8. Getting dataset Part 4.mp4
    00:33
  • 1. Create SageMaker Domain.mp4
    02:21
  • 2. Create SageMaker Studio Notebook.mp4
    01:35
  • 3. Learning how to Stop and Start SageMaker Notebooks.mp4
    05:25
  • 4. Restarting our SageMaker Studio Notebook Kernel.mp4
    01:27
  • 5. Upload and Extract Data in SageMaker.mp4
    07:35
  • 6. Deleting Unused Files.mp4
    01:45
  • 1. Loading and Understanding our Data.mp4
    08:31
  • 2. Counting total Images and getting Image ids.mp4
    16:37
  • 3. Getting Classname Identifier.mp4
    07:25
  • 4. Looking at Random Samples from our Dataframe.mp4
    11:59
  • 5. Understanding Annotations.mp4
    06:50
  • 6. Visualize Random Images Part 1.mp4
    09:50
  • 7. Visualise Random Images Part 2.mp4
    02:16
  • 8. Matplotlib difference between plt.show() and plt.imshow().html
  • 9. Visualising Multiple Images at Once.mp4
    15:21
  • 10. Correcting our Function.html
  • 11. Visualising Bounding Boxes Part 1.mp4
    04:50
  • 12. Visualising Bounding Boxes Part 2 (Theory Lesson).mp4
    16:26
  • 13. Visualising Random Images with Bounding Boxes Part 1.mp4
    15:39
  • 14. Wrong Print Statement.html
  • 15. Visualising Random Images with Bounding Boxes Part 2.mp4
    11:06
  • 16. Read this Lesson if you have issues with Data Visualization.html
  • 1. Clean our Train and Validation Dataframes.mp4
    09:02
  • 2. Split Dataframe into Test and Train.mp4
    05:16
  • 3. Get Images IDs.mp4
    11:21
  • 4. Splitting IDs Theory Lesson.mp4
    07:46
  • 5. Explanation Regarding Next video.html
  • 6. Moving Images to Appropriate Folders.mp4
    09:35
  • 7. Count how many Train and Test Images we have.mp4
    04:58
  • 8. Verifying that our Images have been moved Properly Part 1.mp4
    12:25
  • 9. Verifying that our Images have been moved Properly Part 2.mp4
    04:52
  • 1. Using Mxnet.mp4
    05:44
  • 2.1 RecordIO Reading.html
  • 2. Additional Info regarding RecordIO format.html
  • 3. Using Mxnet RecordIO.mp4
    14:10
  • 4. Correction Regarding Label width.html
  • 5. Preparing Dataframes to RecordIO format Part 1.mp4
    12:41
  • 6.1 37 Getting df into correct format part2.mov
    06:50
  • 6. Preparing Dataframes to RecordIO format Part 2.mp4
    06:50
  • 7. Moving Images To Correct Directory.mp4
    03:22
  • 8. Explanation Regarding the Previous Video.html
  • 9. Verifying that all Images have been Moved Properly.mp4
    03:34
  • 10. Read Before Proceeding to the next Lecture.html
  • 11. Creating Production .lst files (Optional).mp4
    21:52
  • 1. Data Augmentation Theory.mp4
    11:15
  • 2. Augmenting a Random Image.mp4
    07:37
  • 3. Moving Images to new Folder structure.mp4
    05:55
  • 4. Visualising Random Augmented Images Part 1.mp4
    13:49
  • 5. Visualising Random Augmented Images Part 2.mp4
    11:28
  • 6. Read this Lesson if you have issues visualising your images.html
  • 7. Creating Data Augmentation Function Part 1.mp4
    17:25
  • 8. Creating Data Augmentation Function Part 2.mp4
    18:54
  • 9. Checking Image Counts Before running the Function.mp4
    02:10
  • 10. Correctional Video regarding our Function.mp4
    03:56
  • 11. Augmenting Test Dataset and Creating test .lst Files.mp4
    06:20
  • 12. Augmenting Train Dataset and Creating .lst File Part 1.mp4
    05:10
  • 13. Augmenting Train Dataset and Creating .lst File Part 2.mp4
    00:56
  • 14. Verifying that Data Augmentation has Worked.mp4
    06:53
  • 1. Increasing Service Quotas.mp4
    04:51
  • 2. Installing dependencies and Packages.mp4
    06:31
  • 3. Creating our RecordIO Files.mp4
    10:57
  • 4. Uploading our RecordIO data to our S3 bucket.mp4
    11:17
  • 5. Downloading Object Detection Algorithm from AWS ECR.mp4
    05:15
  • 6. Setting up our Estimator Object.mp4
    07:37
  • 7. Setting up Hyperparameters.mp4
    21:07
  • 8. Additional Information for Hyperparameter Tuning in AWS.html
  • 9. Setting up Hyperparameter Ranges.mp4
    10:07
  • 10. Setting up Hyperparameter Tuner.mp4
    09:15
  • 11. Additional Information about mAP (mean average precision).html
  • 12. Starting the Training Job Part 1.mp4
    08:01
  • 13. Starting the Training Job Part 2.mp4
    18:40
  • 14. More on mAP Scores.html
  • 15. Monitoring the Training Job.mp4
    04:18
  • 16. Looking at our Finished Hyperparameter Tuning Job.mp4
    08:33
  • 1. Deploying our Model in a Notebook.mp4
    07:09
  • 2. Creating Visualization Function for Inferences.mp4
    18:35
  • 3. Testing our Endpoint Part 1.mp4
    07:00
  • 4. Testing our Endpoint Part 2.mp4
    04:46
  • 5. Testing our Endpoint from Random Images from the Internet.mp4
    06:59
  • 1. Setting up Batch Transformation Job locally first.mp4
    16:30
  • 2. Starting our Batch Transformation Job.mp4
    07:18
  • 3. Analysing our Batch Transformation Job.mp4
    07:48
  • 4. Visualising Batch Transformation Results.mp4
    15:56
  • 5. Look at this lesson if you have trouble with the Visualisations.html
  • 1. Read this Before Watching the Next Lesson.html
  • 2. Setting up AWS Step Function.mp4
    13:11
  • 3. Verify that CloudFormation has worked.mp4
    02:59
  • 4. Configure Batch Transform Lambda Part 1.mp4
    11:27
  • 5. Configure Batch Transform Lambda Part 2.mp4
    14:26
  • 6. Create Check Batch Transform Job Lambda.mp4
    09:24
  • 7. Fixing Typos and Syntax Errors.mp4
    07:21
  • 8. JSON output Format.mp4
    03:34
  • 9. Creating Cleaning Batch output Lambda Function Part 1.mp4
    10:00
  • 10. Creating Cleaning Batch output Lambda Function Part 2.mp4
    20:53
  • 11. Configuring our Step Function Part 1.mp4
    11:08
  • 12. Configuring our Step Function Part 2.mp4
    13:06
  • 13. Configuring our Step Function Part 3.mp4
    06:18
  • 14. Upload Test Data to S3.mp4
    02:46
  • 15. Testing our Step Function.mp4
    02:35
  • 16. Fixing Errors.mp4
    05:00
  • 17. Testing our Step Function with the Corrections.mp4
    01:03
  • 18. Verifying that our Step Function Ran Successfully.mp4
    01:46
  • 19. Downloading our JSON file from S3.mp4
    01:51
  • 20. Using Event Bridge to set up Cron Job for our Machine Learning Pipeline.mp4
    08:50
  • 21. Verify that the Cron Job works.mp4
    01:37
  • 22. Verifying that our Pipeline Ran Successfully.mp4
    03:54
  • 23. Setting up Production Notebook.mp4
    01:02
  • 24. Extending Our Machine Learning Pipeline.mp4
    04:19
  • 25. Coding our Process Job Notebook Part 1.mp4
    07:57
  • 26. Coding our Process Job Notebook Part 2.mp4
    09:03
  • 27. Coding our Process Job Notebook Part 3.mp4
    12:57
  • 28. Coding our Process Job Notebook Part 4.mp4
    14:56
  • 29. Verifying that the Images have been Saved Properly.html
  • 30. Productionizing our Notebook Part 1.mp4
    12:18
  • 31.1 Link to the Trust Policy.html
  • 31. Productionizing our Notebook Part 2.mp4
    14:47
  • 32. Verify that the Entire Machine Learning Pipeline works.mp4
    04:58
  • 33. Delete Unused Items from Sagemaker EFS.mp4
    03:00
  • 1. Clone the Web Application from Github.mp4
    02:45
  • 2. Setup MongoDB.mp4
    03:48
  • 3. Connect to MongoDB and get AWS Credentials.mp4
    05:51
  • 4. Configuring Env file.mp4
    01:52
  • 5. Install Node modules.mp4
    02:04
  • 6.1 Article About Next.js proxy server.html
  • 6. MERN app Walkthrough Part 1.mp4
    13:57
  • 7. MERN app Walkthrough Part 2.mp4
    08:04
  • 8. MERN app Walkthrough Part 3.mp4
    15:20
  • 9. Output Images Explanation.html
  • 10. MERN app Walkthrough Part 4.mp4
    10:43
  • 11. MERN app Walkthrough Part 5.mp4
    07:24
  • 1. Clean Up Resources.mp4
    06:50
  • 2. Congratulations.mp4
    01:14


    Use AWS Step Functions + SageMaker to Build a Scalable, Production-Ready Machine Learning Pipeline for Plastic Detection

    What You'll Learn?


    • Learn how to use Google's Open Images Dataset V7 to build any custom dataset you want
    • Create Sagemaker Domains
    • Upload and Stream data into your SageMaker Environment
    • Learn how to set up secure IAM roles on AWS
    • Build a Production Ready Object detection Algorithm
    • Use Pandas, Numpy for Feature and Data Engineering
    • Understanding Object detection annotations
    • Visualising Images and Bounding Boxes with Matplotlib
    • Learn how Sagemaker's Elastic File System (EFS) works
    • Use AWS' built-in Object detection algorithm with Transfer Learning
    • How to set up Transfer Learning with both VGG-16 and ResNet-50 in AWS
    • Learn how to save images to RecordIO format
    • Learn what RecordIO format is
    • Learn what .lst files are and why we need them with Object Detection in AWS
    • Learn how to do Data Augmentation for Object detection
    • Gain insights into how we can manipulate our input data with data augmentation
    • Learn AWS Pricing for SageMaker, Step Functions, Batch Transformation Jobs, Sagemaker EFS, and many more
    • Learn how to choose the ideal compute (Memory, vCPUs, GPUs, and kernels) for your Sagemaker tasks
    • Learn how to install dependencies to a Sagemaker Notebook
    • Setup Hyperparameter Tuning Jobs in AWS
    • Set up Training Jobs in AWS
    • Learn how to Evaluate Object detection models with the mAP (mean average precision) score
    • Set up Hyperparameter tuning jobs with Bayesian Search
    • Learn how you can configure Batch Size, Epochs, optimisers (Adam, RMSProp), Momentum, Early stopping, Weight decay, overfitting prevention, and many more in AWS
    • Monitor a Training Job in Real time with Metrics
    • Use Cloudwatch to look at various logs
    • How to Test your model in a Sagemaker notebook
    • Learn what Batch Transformation is
    • Set up Batch Transformation Jobs
    • How to use Lambda functions
    • Saving outputs to S3 bucket
    • Prepare Training and Test Datasets
    • Data Engineering
    • How to build Complex Production Ready Machine Learning Pipelines with AWS Step Functions
    • Use any custom dataset to build an Object detection model
    • Use AWS Cloudformation with AWS Step Functions to set up a Pipeline
    • Learn how to use Prebuilt Pipelines to Configure to your own needs
    • Learn how you can Create any Custom Pipelines with Step Functions(with GUI as well)
    • Learn how to Integrate Lambda Functions with AWS Step Functions
    • Learn how to Create and Handle Asynchronous Machine Learning Pipelines
    • How to use Lambda to read and write from S3
    • AWS best practices
    • Using AWS EventBridge to set up CRON jobs to tell your Pipeline when to Run
    • Learn how to Create End-to-End Machine Learning Pipelines
    • Learn how to Use Sagemaker Notebooks in Production and Schedule Jobs with them
    • Learn Machine Learning Pipeline Design
    • Create a MERN stack web app to interact with our Machine Learning Pipeline
    • How to set up a production ready Mongodb database for our Web App
    • Learn how to use React, Nextjs, Mongodb, ExpressJs to build a web application
    • Create and Interact with JSON files
    • Put Convolutional Neural Networks into Production
    • Deep Learning Techniques
    • How to clean up an AWS account after you are done
    • Train Machine Learning models on AWS
    • How to use AWS' GPUs to speed up Machine Learning Training jobs
    • Learn what AWS Elastic Container Registry (ECR) is and how you can download Machine Learning Algorithms from it
    • AWS Security Best practices
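
Among the skills listed above, the RecordIO/.lst preparation is the most format-sensitive. As a hedged sketch, a single object-detection .lst line can be built like this; the header width, label width, and normalized-corner layout follow the common MXNet im2rec convention, and the function name and sample values are illustrative assumptions, not the course's exact code:

```python
# Hypothetical helper for building one line of an object-detection .lst file
# in the im2rec-style, tab-separated layout:
#   index  header_width  label_width  [class xmin ymin xmax ymax]...  path

def make_lst_line(index, objects, image_path, img_w, img_h):
    """objects: list of (class_id, xmin, ymin, xmax, ymax) in pixels."""
    header_width = 2   # number of header fields that follow this pair
    label_width = 5    # fields per object: class id + 4 coordinates
    fields = [str(index), str(header_width), str(label_width)]
    for class_id, xmin, ymin, xmax, ymax in objects:
        # Corner coordinates are normalized to [0, 1] by the image size.
        fields += [str(class_id),
                   f"{xmin / img_w:.4f}", f"{ymin / img_h:.4f}",
                   f"{xmax / img_w:.4f}", f"{ymax / img_h:.4f}"]
    fields.append(image_path)
    return "\t".join(fields)

# Example: one 640x480 image with a single box of class 0.
line = make_lst_line(0, [(0, 100, 50, 300, 250)],
                     "train/plastic_0001.jpg", 640, 480)
```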

    Who is this for?


  • For developers who want to take their machine learning skills to the next level by being able not only to build machine learning models, but also to incorporate them into a complex, secure, production-ready machine learning pipeline


    Description

    Welcome to the ultimate course on creating a scalable, secure, complex machine learning pipeline with Sagemaker, Step Functions, and Lambda functions. In this course, we will cover all the necessary steps to create a robust and reliable machine learning pipeline, from data preprocessing to hyperparameter tuning for object detection.

    We will start by introducing you to the basics of AWS Sagemaker, a fully managed service that provides developers and data scientists with the ability to build, train, and deploy machine learning models quickly and easily. You will learn how to use Sagemaker to preprocess and prepare your data for machine learning, as well as how to build and train your own machine learning models using Sagemaker's built-in algorithms.
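
    As a flavor of that data-preparation step, here is a minimal, hedged pandas sketch in the style of the Open Images annotation CSVs used in the course. The column names follow the Open Images schema (which stores box corners already normalized to [0, 1]); the label ID and sample rows are made-up illustrations:

```python
import pandas as pd

# Toy stand-in for an Open Images annotation CSV: one bounding box per row.
# The LabelName value below is a made-up machine ID, not a real class.
annotations = pd.DataFrame({
    "ImageID":   ["img001", "img001", "img002", "img003"],
    "LabelName": ["/m/example", "/m/example", "/m/other", "/m/example"],
    "XMin": [0.10, 0.55, 0.20, 0.05],
    "XMax": [0.40, 0.90, 0.60, 0.35],
    "YMin": [0.15, 0.30, 0.10, 0.25],
    "YMax": [0.50, 0.75, 0.45, 0.70],
})

# Filter down to the one class we are training on, then collect the unique
# image IDs (the pattern behind "counting total images and getting image IDs").
target = annotations[annotations["LabelName"] == "/m/example"]
image_ids = target["ImageID"].unique().tolist()
```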

    Next, we will dive into AWS Step Functions, which allow you to coordinate and manage the different steps of your machine learning pipeline. You will learn how to create a scalable, secure, and robust machine learning pipeline using Step Functions, and how to use Lambda functions to trigger your pipeline's different steps.
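
    To make the Lambda-triggered step concrete, here is a hedged sketch of a "start batch transform" handler. Building the request as a plain dict keeps the logic testable; in a real deployment the dict would be passed to boto3's SageMaker client via create_transform_job. All names (model, bucket paths, job name, instance type) are illustrative assumptions, not the course's exact configuration:

```python
import json

def build_transform_request(model_name, input_s3, output_s3, job_name):
    # Shape mirrors the boto3 SageMaker create_transform_job parameters.
    return {
        "TransformJobName": job_name,
        "ModelName": model_name,
        "TransformInput": {
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix", "S3Uri": input_s3}},
            "ContentType": "application/x-image",
        },
        "TransformOutput": {"S3OutputPath": output_s3},
        "TransformResources": {
            "InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
    }

def lambda_handler(event, context):
    request = build_transform_request(
        event["model_name"], event["input_s3"],
        event["output_s3"], event["job_name"])
    # Real call (needs AWS credentials), omitted in this sketch:
    # boto3.client("sagemaker").create_transform_job(**request)
    return {"statusCode": 200, "body": json.dumps(request)}
```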

    In addition, we will cover deep learning related topics, including how to use neural networks for object detection, and how to use hyperparameter tuning to optimize your machine learning models for different use cases.
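
    The object detection models in the course are scored with mAP, which is built on intersection over union (IoU) between predicted and ground-truth boxes. A minimal IoU helper in (xmin, ymin, xmax, ymax) form — a building block of mAP, not the full metric:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the overlap rectangle (zero if boxes are disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0
```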

    Finally, we will walk you through the creation of a web application that will interact with your machine learning pipeline. You will learn how to use React, Next.js, Express, and MongoDB to build a web app that will allow users to submit data to your pipeline, view the results, and track the progress of their jobs.

    By the end of this course, you will have a deep understanding of how to create a scalable, secure, complex machine learning pipeline using Sagemaker, Step Functions, and Lambda functions. You will also have the skills to build a web app that can interact with your pipeline, opening up new possibilities for how you can use your machine learning models to solve real-world problems.


    Patrik Szepesi
    I am an AWS certified machine learning engineer, working as a machine learning engineer at Blue River Technology, a Silicon Valley company creating computer vision machine learning solutions (such as autonomous vehicles) for John Deere. I have worked as a data scientist at companies like Morgan Stanley, and I am also participating in several artificial intelligence related research projects with Óbuda University. I am here to share the most cutting-edge technologies surrounding machine learning and AWS.
  • Language: English
  • Training sessions: 120
  • Duration: 16:17:24
  • Release Date: 2023/04/25