Generative AI, LLM MODELS, Full Stack 15+ Projects
Fikrat Gasimov
8:54:45
Description
Core practical Generative AI, LLM, and JavaScript applications for fast inference prototypes. Get hired: Generative AI
What You'll Learn?
- What is Docker and How to Use It
- Advanced Docker Usage
- What Are OpenCL and OpenGL, and When to Use Each?
- (LAB) TensorFlow and PyTorch Installation and Configuration with Docker
- (LAB) Dockerfile, Docker Compose, and Docker Compose Debug File Configuration
- (LAB) Different YOLO Versions, Comparisons, and When to Use Which Version of YOLO for Your Problem
- (LAB) Jupyter Notebook Editor as well as Visual Studio Code Skills
- (LAB) Learn and Prepare Yourself for Full Stack and C++ Coding Exercises
- (LAB) TensorRT Precision FLOAT 32/16 Model Quantization
- Key Differences: Explicit vs. Implicit Batch Size
- (LAB) TensorRT Precision INT8 Model Quantization
- (LAB) Visual Studio Code Setup and Docker Debugging with VS Code and the GDB Debugger
- (LAB) What Is the ONNX Framework and How to Apply ONNX to Your Custom C++ Problems
- (LAB) What Is the TensorRT Framework and How to Apply It to Your Custom Problems
- (LAB) Custom Detection, Classification, and Segmentation Problems and Inference on Images and Videos
- (LAB) Basic C++ Object-Oriented Programming
- (LAB) Advanced C++ Object-Oriented Programming
- (LAB) Deep Learning Problem-Solving Skills on Edge Devices and in Cloud Computing with the C++ Programming Language
- (LAB) How to Generate High-Performance Inference Models on Embedded Devices, in order to Get High Precision and FPS in Detection as well as Lower GPU Memory Consumption
- (LAB) Visual Studio Code with Docker
- (LAB) GDB Debugger with SonarLint and SonarQube
- (LAB) YOLOv4 ONNX Inference with OpenCV C++ DNN Libraries
- (LAB) YOLOv5 ONNX Inference with OpenCV C++ DNN Libraries
- (LAB) YOLOv5 ONNX Inference with Dynamic C++ TensorRT Libraries
- (LAB) C++ (11/14/17) Compiler Programming Exercises
- Key Differences: OpenCV and CUDA / OpenCV and TensorRT
- (LAB) Deep Dive on React Development with an Axios Front-End REST API
- (LAB) Deep Dive on Flask REST API with React and MySQL
- (LAB) Deep Dive on Text Summarization Inference on a Web App
- (LAB) Deep Dive on BERT (LLM) Fine-Tuning and Emotion Analysis on a Web App
- (LAB) Deep Dive on Distributed GPU Programming with Natural Language Processing (Large Language Models)
- (LAB) Deep Dive on Generative AI use cases, project lifecycle, and model pre-training
- (LAB) Fine-tuning and evaluating large language models
- (LAB) Reinforcement Learning and LLM-Powered Applications; Alignment Fine-Tuning with User Feedback
- (LAB) Quantization of Large Language Models with Modern NVIDIA GPUs
- (LAB) C++ OOP TensorRT Quantization and Fast Inference
- (LAB) Deep Dive on the Hugging Face Library
- (LAB) Translation, Text Summarization, and Question Answering
- (LAB) Sequence-to-Sequence Models, Encoder-Only Models, and Decoder-Only Models
- (LAB) Define the terms Generative AI, large language model, and prompt, and describe the transformer architecture that powers LLMs
- (LAB) Discuss computational challenges during model pre-training and determine how to efficiently reduce memory footprint
- (LAB) Describe how fine-tuning with instructions using prompt datasets can improve performance on one or more tasks
- (LAB) Explain how PEFT decreases computational cost and overcomes catastrophic forgetting
- (LAB) Describe how RLHF uses human feedback to improve the performance and alignment of large language models
- (LAB) Discuss the challenges that LLMs face with knowledge cut-offs, and explain how information retrieval and augmentation techniques can overcome these challenges
- Recognize and understand the various strategies and techniques used in fine-tuning language models for specialized applications.
- Master the skills necessary to preprocess datasets effectively, ensuring they are in the ideal format for AI training.
- Investigate the vast potential of fine-tuned AI models in practical, real-world scenarios across multiple industries.
- Acquire knowledge on how to estimate and manage the costs associated with AI model training, making the process efficient and economical.
- Distributed Computing with DDP (Distributed Data Parallel) and FSDP (Fully Sharded Data Parallel) across multiple GPUs/CPUs with PyTorch, together with Retrieval-Augmented Generation
- The RoBERTa model was proposed in RoBERTa: A Robustly Optimized BERT Pretraining Approach
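One objective above is to describe the transformer architecture that powers LLMs. Its core building block is scaled dot-product attention, which can be sketched in a few lines of pure Python (an illustrative toy example, not taken from the course materials; real models use tensor libraries such as PyTorch):

```python
import math

def softmax(xs):
    """Softmax over a list of scores; subtracting the max keeps exp() stable."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V,
    with Q, K, V given as lists of row vectors."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # similarity of this query to every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        # weighted average of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query attending over two key/value pairs: the output is a blend of the
# two value rows, weighted toward the key the query is most similar to.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

In a full transformer this operation runs per head, with learned projections producing Q, K, and V from the token embeddings.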
This course dives into state-of-the-art scientific challenges in Generative AI. It helps you uncover ongoing problems and develop or customize your own Large Model applications. The course is suitable for any candidate (student, engineer, or expert) with strong motivation to learn Large Language Models and today's ongoing challenges, as well as their deployment in Python- and JavaScript-based web applications and in the C/C++ programming languages. Candidates will gain deep knowledge of TensorFlow, PyTorch, and Keras models and of Hugging Face, all with Docker services.
In addition, you will be able to optimize and quantize models with the TensorRT framework for deployment in a variety of sectors. Moreover, you will learn to deploy quantized LLM models to web pages built with React, JavaScript, and Flask.
Here you will also learn how to integrate Reinforcement Learning (PPO) with Large Language Models in order to fine-tune them based on human feedback.
Candidates will learn how to code and debug in the C/C++ programming languages at least at an intermediate level.
LLM Models used:
The Falcon,
LLAMA2,
BLOOM,
MPT,
Vicuna,
FLAN-T5,
GPT2/GPT3, GPT-NeoX
BERT 101, DistilBERT
Fine-Tuning Small Models under the Supervision of Big Models
and so on...
Learning and Installation of Docker from scratch
Knowledge of JavaScript, HTML, CSS, and Bootstrap
React Hooks, the DOM, and JavaScript Web Development
Deep Dive on Deep Learning Transformer based Natural Language Processing
Python Flask REST API along with MySQL
Preparation of Dockerfiles, Docker Compose, as well as Docker Compose debug files
Configuration and Installation of Plugin Packages in Visual Studio Code
Learning, Installation, and Configuration of Frameworks such as TensorFlow, PyTorch, and Keras with Docker Images from Scratch
Preprocessing and Preparation of Deep learning datasets for training and testing
OpenCV DNN with C++ Inference
Training, Testing and Validation of Deep Learning frameworks
Conversion of Prebuilt Models to ONNX, and ONNX Inference on Images with C++ Programming
Conversion of ONNX Models to TensorRT Engines with the C++ Runtime and Compile-Time APIs
TensorRT Engine Inference on Images and Videos
Comparison of Achieved Metrics and Results between TensorRT and ONNX Inference
Prepare Yourself for C++ Object Oriented Programming Inference!
Ready to solve any programming challenge with C/C++
Ready to Tackle Deployment Issues on Edge Devices as well as in the Cloud
Large Language Model Fine-Tuning
Large Language Models Hands-On Practice: BLOOM, GPT3-GPT3.5, FLAN-T5 family
Large Language Model Training, Evaluation, and User-Defined Prompt In-Context Learning / Online Learning
Human Feedback Alignment on LLMs with Reinforcement Learning (PPO), with Large Language Models BERT and FLAN-T5
How to Avoid Catastrophic Forgetting on Large Multi-Task LLM Models
How to Prepare LLMs for Multi-Task Problems such as Code Generation, Summarization, Content Analysis, and Image Generation
Quantization of Large Language Models with Various Existing State-of-the-Art Techniques
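The quantization topics above (TensorRT FP16/INT8 and LLM quantization) rest on one core idea: mapping floating-point weights onto a small integer range. A minimal pure-Python sketch of symmetric per-tensor INT8 quantization follows; it is illustrative only, since real toolchains such as TensorRT additionally use calibration data to choose the scale:

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization: map floats to [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    # round to the nearest integer step and clamp to the INT8 range
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the integer codes."""
    return [qi * scale for qi in q]

w = [0.02, -1.27, 0.64, 0.4]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(q, scale, w_hat)
```

The round-trip error per weight is bounded by half the scale step, which is why INT8 inference can stay close to FP32 accuracy while cutting memory and bandwidth by 4x.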
Important Note:
In this course there is nothing to copy & paste; you will put your hands on every line of the project to become a successful LLM and Web Application Developer!
You DO NOT need any special hardware. You will deliver projects either in the cloud or on your local computer.
Who this course is for:
- University Students
- New Graduates
- Workers
- Those who want to deploy Deep Learning Models on Edge Devices
- AI Experts
- Embedded Software Engineers
- Natural Language Developers
- Machine Learning & Deep Learning Engineers
- Full Stack Developers (JavaScript, Python)
Udemy
- Language: English
- Training sessions: 72
- Duration: 8:54:45
- Release Date: 2024/07/20