
Building, Deploying and Scaling LLM-Powered Applications


LLM Developer

2:27:42

19 Lessons
  • 1.1 Welcome to The Course.pdf
  • 1. Introduction.mp4
    01:28
  • 2. Installing Development Software.mp4
    09:14
  • 1. Building A Prediction Pipeline.mp4
    11:02
  • 2. Testing Prediction Pipeline.mp4
    08:51
  • 1. Installing AWS CLI.mp4
    08:15
  • 2. Setting Up Secrets Manager.mp4
    06:49
  • 1. Injecting API Key From Secrets Manager into Application.mp4
    04:36
  • 2. Feeding API Key Directly Into Prediction Pipeline.mp4
    03:44
  • 1. Building Front End Of Application.mp4
    03:56
  • 2. Testing Front End Of Application.mp4
    02:33
  • 1. Writing Docker File.mp4
    07:01
  • 2. Packaging Our Application and Storing on Elastic Container Registry.mp4
    06:27
  • 3. Testing Our Packaged Application.mp4
    08:07
  • 1. Modifying Account Permissions For Deployment.mp4
    10:00
  • 1. Deploying Our Application on ECS.mp4
    09:30
  • 2. Testing Our Application on ECS.mp4
    05:58
  • 1. Difference between Horizontal Scaling vs Vertical Scaling.mp4
    12:57
  • 2. Building Scalable Service - Adding Load Balancer and Auto Scaling.mp4
    16:30
  • 3. Exposing & Testing Our Service.mp4
    10:44


Course 1 - Building and Scaling a Text Summarization Service using LangChain, OpenAI and Amazon Web Services

    What You'll Learn?


    • Build a complete, scalable software application powered by a Large Language Model and deploy it at scale on Amazon Web Services
    • Integrate your application's LLM-powered backend with a Streamlit UI frontend
    • Test your application locally, then package it with Docker, and learn best practices for running Streamlit inside Docker
    • Use a template and best practices to inject your OpenAI API key into your containerized application at runtime
    • Identify vulnerabilities in your containerized application and apply best practices to resolve them
    • Design your system's architecture based on the components and design choices in your application
    • Understand the differences between horizontal scaling and vertical scaling
    • Apply serverless deployment in depth, and add load balancers and auto scaling to your application
    • Apply these skills to build, deploy and scale other LLM-powered LangChain applications
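
    The prediction pipeline the bullets above refer to can be sketched roughly as follows. This is an illustration only: the function names are assumptions, and the `llm` argument stands in for any model callable (e.g. a LangChain chain or an OpenAI client), not the course's exact code.

    ```python
    def build_prediction_pipeline(llm):
        """Compose a summarization prompt template with a model callable."""
        template = "Summarize the following text in one sentence:\n\n{text}"

        def predict(text: str) -> str:
            # Fill the prompt template and delegate to the model callable.
            return llm(template.format(text=text))

        return predict

    # Usage with a stand-in model; a real app would pass a LangChain/OpenAI client.
    echo_llm = lambda prompt: f"[summary of {len(prompt)} chars]"
    pipeline = build_prediction_pipeline(echo_llm)
    print(pipeline("LangChain lets you compose prompts, models and parsers."))
    ```

    Keeping the model behind a plain callable like this also makes the pipeline easy to unit-test locally before packaging it with Docker.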

    Who is this for?


  • The focus of this course is to introduce you to Machine Learning Engineering. It enables learners to build, deploy and scale an end-to-end software application whose backend is powered by a Large Language Model that generates results from user input. The main goal is to teach a repeatable framework for building other software applications that use LLMs from OpenAI, injecting API keys at runtime to avoid key leakage, and deploying those applications at scale on Amazon Web Services.
  • This is an intermediate-level course intended for developers interested in developing, deploying and scaling LLM-powered applications. The target audience includes software engineers, data scientists, ML engineers and AI engineers. These are not hard requirements, however; anyone who wants to learn to build, test and deploy software applications at scale is equally welcome.
    What You Need to Know?


  • Users of this course must know how to write code in Python and should have basic knowledge of LangChain (though it is discussed in the course videos) and basic knowledge of AWS. Basic knowledge of Docker is preferred but not required, as the information needed to package applications for deployment is taught in the course.


    Description

    Are you ready to dive deep into the world of Machine Learning Engineering and build powerful software applications? Our Machine Learning Engineering course is designed to equip you with the skills and knowledge to harness the full potential of Langchain, integrate the OpenAI API, deploy applications on AWS Elastic Container Service, and efficiently manage scaling using Load Balancers and Auto Scaling Groups.

    In this hands-on course, you'll learn how to create robust ML applications from the ground up. We'll start by mastering LangChain, a cutting-edge framework for building applications with large language models, and demonstrate how to securely inject your OpenAI API key into the prediction pipeline at runtime. You'll gain proficiency in designing and developing ML applications that can understand, process, and generate human-like text.
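    One way to realize the runtime key injection described above is sketched below. This is a minimal sketch, not the course's exact code: the secret name, region, and JSON layout of the secret are assumptions.

    ```python
    import json
    import os

    def get_openai_api_key(secret_name: str = "openai/api-key",
                           region: str = "us-east-1") -> str:
        """Resolve the OpenAI API key at runtime instead of baking it into the image."""
        # Prefer a key injected as an environment variable, e.g. via
        # `docker run -e OPENAI_API_KEY=...` or an ECS task definition.
        key = os.environ.get("OPENAI_API_KEY")
        if key:
            return key
        # Otherwise fetch it from AWS Secrets Manager (requires boto3 and AWS
        # credentials; the secret is assumed to be stored as a JSON object).
        import boto3  # imported lazily so local runs work without AWS tooling
        client = boto3.client("secretsmanager", region_name=region)
        payload = client.get_secret_value(SecretId=secret_name)
        return json.loads(payload["SecretString"])["OPENAI_API_KEY"]
    ```

    Because the key is resolved only when the container starts, it never appears in the Docker image layers or in source control.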

    As you progress, we'll explore the fundamental concepts of Horizontal Scaling and Vertical Scaling, providing a clear understanding of when and how to implement each strategy. You'll then discover how to scale your ML application with ease by deploying Application Load Balancers and Auto Scaling Groups on AWS, ensuring high availability and fault tolerance.
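
    As a rough illustration of how a target-tracking auto scaling policy of the kind described above decides capacity (simplified: the real AWS policy also applies cooldowns and scale-in dampening, and these parameter names are illustrative):

    ```python
    import math

    def desired_task_count(current_tasks: int,
                           observed_cpu_pct: float,
                           target_cpu_pct: float,
                           min_tasks: int = 1,
                           max_tasks: int = 10) -> int:
        """Simplified target-tracking rule: scale capacity proportionally to load."""
        # Scale so that per-task utilization returns to the target,
        # then clamp to the configured min/max bounds.
        raw = current_tasks * (observed_cpu_pct / target_cpu_pct)
        return max(min_tasks, min(max_tasks, math.ceil(raw)))

    # 2 tasks at 90% CPU against a 60% target -> scale out to 3 tasks
    print(desired_task_count(2, 90.0, 60.0))
    ```

    This is horizontal scaling: capacity changes by adding or removing task replicas behind the load balancer, rather than resizing any single instance.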

    By the end of this course, you'll be well-versed in building ML-driven software applications, deploying them on AWS, and scaling them to meet the demands of your users. Join us on this exciting journey into the world of Machine Learning Engineering and become a skilled practitioner in this rapidly evolving field.


    LLM Developer
    I am an ML Engineer with 5 years of NLP experience. I have experience developing NLP applications at scale, and through my knowledge and experience of the industry I want to build courses that help others get better at software development for Natural Language Processing. The courses I aim to build focus first on teaching the basics of software development in a hands-on manner, and then on applying that knowledge to different scenarios and problem types.
    • Language: English
    • Training sessions: 19
    • Duration: 2:27:42
    • Release Date: 2023/12/05