
Generative AI, LLM MODELS, Full Stack 15+ Projects


Fikrat Gasimov

8:54:45

286 Views
  • 1.1 webapp.pptx
  • 1. Course Summary.mp4
    06:29
  • 2. React Hooks.mp4
    06:44
  • 3. React DOM.mp4
    07:36
  • 4. React Rest API&Axios.mp4
    07:41
  • 5. Flask Rest API.mp4
    04:34
  • 6. Javascript Basics Concepts.mp4
    07:05
  • 7. Javascript Advance concepts.mp4
    07:52
  • 1. WebApp-Object Detection Demo.mp4
    05:14
  • 2. YoloV7 Fast Inference Demo.mp4
    00:52
  • 1.1 wepapp_state_diagram.drawio.zip
  • 1. Overall Flow State Diagram for Inference Web APP.mp4
    02:44
  • 2. Docker File Configuration.mp4
    08:00
  • 3. Docker Build and Set Up.mp4
    02:07
  • 4. How to Run Docker RUN.mp4
    07:59
  • 5. Configuration of Docker Container with Visual Code.mp4
    03:56
  • 1. Yolov7 Start Implementation.mp4
    03:11
  • 2. Yolov7 Server Implementation 2.mp4
    11:09
  • 3. Yolov7 Server Implementation 3.mp4
    09:46
  • 4. Yolov7 Server Implementation 4.mp4
    12:14
  • 5. Yolov7 Server Implementation 5.mp4
    12:53
  • 6. Yolov7 Server Implementation 6.mp4
    07:50
  • 1. Flask Server Implementation 1.mp4
    16:40
  • 2. Flask Server Implementation 2.mp4
    10:00
  • 3. Flask Server Sign In Implementation.mp4
    09:57
  • 4. Flask Server Registration Implementation.mp4
    06:57
  • 1.1 images.zip
  • 1.2 Images.zip
  • 1.3 models.zip
  • 1. Flask Server & Yolov7 Integration.mp4
    13:33
  • 2. Flask Server & Yolov7 Integration part 2.mp4
    05:37
  • 3. Flask Server & Yolov7 Integration part 3.mp4
    04:52
  • 1. Flask Server & Web APP design part 1.mp4
    13:54
  • 2. Flask Web App DL Inference.mp4
    09:54
  • 3. Flask Web App DL Image Inference.mp4
    07:43
  • 4.1 flow diagram for back-end&front-end.drawio.zip
  • 4. Flow Diagram for Back-End&Front-End.mp4
    03:40
  • 1.1 inference web app.drawio.zip
  • 1. Custom Web App Emotion Detection, BERT, Hugging FACE, React JS, Flask, MySql.mp4
    04:55
  • 2.1 cleaning_dataset.drawio.zip
  • 2. How to start for Prototyping Large Language Model with Web APP and Flask.mp4
    03:14
  • 3. BERT & Hugging Face Feature Engineering Part 1.mp4
    10:51
  • 4. Feature Engineering and Preprocessing part 2.mp4
    12:27
  • 5. Feature Engineering and Preprocessing part 3.mp4
    08:24
  • 6. Feature Engineering and Preprocessing part 4.mp4
    04:08
  • 1. Dataloader,Hugging Face Integration.mp4
    04:45
  • 2. Dataloader,Hugging Face Integration Part 2.mp4
    11:52
  • 3. Dataloader,Hugging Face Integration Part 3.mp4
    15:40
  • 1. BERT_FINE Part 1.mp4
    12:58
  • 1.1 models.zip
  • 1. Bert Model Train&Val part 1.mp4
    07:41
  • 2. training part 2.mp4
    07:35
  • 3. train and val part 3.mp4
    16:45
  • 4. Train&Val successful.mp4
    04:32
  • 1.1 pretrained.zip
  • 1. Pretrained Model Bert and Tokenizer download.mp4
    02:07
  • 2. Where we are and where we have to .mp4
    02:31
  • 3. preprocessing setup.mp4
    05:16
  • 4. Model BackBone setup.mp4
    02:49
  • 5. Model Inference Part 1.mp4
    06:48
  • 6. Model Inference Part 2.mp4
    05:11
  • 1. Flask Server & Inference Part 1.mp4
    04:09
  • 2. Flask Server & Inference Part 2.mp4
    10:10
  • 3. Flask Server & Inference Part 3.mp4
    07:57
  • 1. React Familiarity.mp4
    07:47
  • 2. React Installation.mp4
    05:30
  • 3. React set up part 1.mp4
    01:58
  • 4.1 emotion_detection.zip
  • 4.2 question-answering.zip
  • 4.3 questions.zip
  • 4. react successful installation.mp4
    04:46
  • 5. Main React Component.mp4
    22:33
  • 6. Evaluate Implementation.mp4
    04:21
  • 7. Emotion Analysis component.mp4
    07:32
  • 8. User FeedBackk Route API.mp4
    10:35
  • 9. Non User Feedback Route API.mp4
    07:02
  • 10. Emotion Analysis Implementation Return.mp4
    11:51
  • 11. Demo Emotion Analysis Successfully Implementated.mp4
    08:47
  • 1.1 presentation.zip
  • 1. Demo Transformer-React.mp4
    01:52
  • 2. React Question Answer Component.mp4
    09:43
  • 3. React Question Answer Component 2.mp4
    04:04
  • 4. LLM Transormer Explanation.mp4
    04:55
  • 5. Flask Route Based Implementation.mp4
    05:29
  • 1. CPlus_Cplus TensorRT&Onnx With YoloV4.mp4
    02:45
  • 2. How to implement Onnx Cplus_cplus with YoloV5 Inference.mp4
    01:47
    Description


    Core Practical Generative AI, LLM, Javascript Applications for 20X Fast Inference Prototypes. Get Hired: Generative AI

    What You'll Learn?


    • What Docker is and how to use it
    • Advanced Docker usage
    • What OpenCL and OpenGL are, and when to use them
    • (LAB) TensorFlow and PyTorch installation and configuration with Docker
    • (LAB) Dockerfile, Docker Compose and Docker Compose debug file configuration
    • (LAB) Different YOLO versions, comparisons, and when to use which version of YOLO for your problem
    • (LAB) Jupyter Notebook editor as well as Visual Studio Code coding skills
    • (LAB) Learn and prepare yourself for full stack and C++ coding exercises
    • (LAB) TENSORRT PRECISION FLOAT 32/16 MODEL QUANTIZATION
    • Key Differences: Explicit vs. Implicit Batch Size
    • (LAB) TENSORRT PRECISION INT8 MODEL QUANTIZATION
    • (LAB) Visual Studio Code setup and Docker debugging with VS Code and the GDB debugger
    • (LAB) What the ONNX framework is and how to apply ONNX to your custom C++ problems
    • (LAB) What the TensorRT framework is and how to apply it to your custom problems
    • (LAB) Custom detection, classification and segmentation problems and inference on images and videos
    • (LAB) Basic C++ Object-Oriented Programming
    • (LAB) Advanced C++ Object-Oriented Programming
    • (LAB) Deep learning problem-solving skills on edge devices and cloud computing with the C++ programming language
    • (LAB) How to generate high-performance inference models on embedded devices, in order to get high precision and FPS detection as well as lower GPU memory consumption
    • (LAB) Visual Studio Code with Docker
    • (LAB) GDB debugger with SonarLint and SonarQube
    • (LAB) YOLOv4 ONNX inference with OpenCV C++ DNN libraries
    • (LAB) YOLOv5 ONNX inference with OpenCV C++ DNN libraries
    • (LAB) YOLOv5 ONNX inference with dynamic C++ TensorRT libraries
    • (LAB) C++ (11/14/17) compiler programming exercises
    • Key Differences: OpenCV and CUDA / OpenCV and TensorRT
    • (LAB) Deep Dive on React development with an Axios front-end REST API
    • (LAB) Deep Dive on Flask REST API with React and MySQL
    • (LAB) Deep Dive on text summarization inference on a web app
    • (LAB) Deep Dive on BERT (LLM) fine-tuning and emotion analysis on a web app (a minimal fine-tuning sketch follows this list)
    • (LAB) Deep Dive on distributed GPU programming with Natural Language Processing (Large Language Models)
    • (LAB) Deep Dive on Generative AI use cases, project lifecycle, and model pre-training
    • (LAB) Fine-tuning and evaluating large language models
    • (LAB) Reinforcement learning and LLM-powered applications, alignment fine-tuning with user feedback
    • (LAB) Quantization of Large Language Models with modern NVIDIA GPUs
    • (LAB) C++ OOP TensorRT quantization and fast inference
    • (LAB) Deep Dive on the Hugging Face library
    • (LAB) Translation, text summarization, question answering
    • (LAB) Sequence-to-sequence models, encoder-only models, decoder-only models
    • (LAB) Define the terms Generative AI, large language model and prompt, and describe the transformer architecture that powers LLMs
    • (LAB) Discuss computational challenges during model pre-training and determine how to efficiently reduce the memory footprint
    • (LAB) Describe how fine-tuning with instructions using prompt datasets can improve performance on one or more tasks
    • (LAB) Explain how PEFT decreases computational cost and overcomes catastrophic forgetting
    • (LAB) Describe how RLHF uses human feedback to improve the performance and alignment of large language models
    • (LAB) Discuss the challenges that LLMs face with knowledge cut-offs, and explain how information retrieval and augmentation techniques can overcome these challenges
    • Recognize and understand the various strategies and techniques used in fine-tuning language models for specialized applications.
    • Master the skills necessary to preprocess datasets effectively, ensuring they are in the ideal format for AI training.
    • Investigate the vast potential of fine-tuned AI models in practical, real-world scenarios across multiple industries.
    • Acquire knowledge of how to estimate and manage the costs associated with AI model training, making the process efficient and economical.
    • Distributed computing with DDP (Distributed Data Parallel) and Fully Sharded Data Parallel across multiple GPUs/CPUs with PyTorch, together with Retrieval Augmented Generation
    • The RoBERTa model was proposed in RoBERTa: A Robustly Optimized BERT Pretraining Approach
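
    As a concrete illustration of the BERT fine-tuning and emotion-analysis items above, the following is a minimal sketch using the Hugging Face Trainer API. The bert-base-uncased checkpoint, the dair-ai/emotion dataset and all hyperparameters are illustrative assumptions, not necessarily the exact setup used in the course videos.

```python
# Minimal sketch: fine-tune BERT for emotion classification with Hugging Face.
# Checkpoint, dataset and hyperparameters are illustrative assumptions,
# not necessarily the exact configuration used in the course.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

dataset = load_dataset("dair-ai/emotion")          # 6 emotion labels (assumed dataset)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Pad/truncate so every example has the same length for batching.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=6)

args = TrainingArguments(
    output_dir="bert-emotion",
    num_train_epochs=2,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("bert-emotion")                 # model weights + config
tokenizer.save_pretrained("bert-emotion")          # tokenizer files for later serving
```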

    Who is this for?


  • University Students
  • New Graduates
  • Workers
  • Those who want to deploy Deep Learning Models on Edge Devices
  • AI experts
  • Embedded Software Engineers
  • Natural Language Developers
  • Machine Learning & Deep Learning Engineers
  • Full Stack Developers, Javascript, Python

    What You Need to Know?


  • To follow this course comfortably, candidates should ideally first complete the course: Tensorflow-Pytorch-TensorRT-ONNX-From Zero to Hero (YOLOVX)
  • Basic C++ Programming Knowledge
  • Basic C Programming Knowledge
  • Local Nvidia GPU Device
  • Basic Natural Language Processing Knowledge
  • Basic Python Knowledge
  • Basic HTML, CSS, Bootstrap Knowledge


    Description

    This course dives into state-of-the-art scientific challenges in Generative AI. It helps you uncover ongoing problems and develop or customize your own Large Language Model applications. The course is suitable for any candidates (students, engineers, experts) who are strongly motivated to work on Large Language Models and today's ongoing challenges, as well as on their deployment with Python-based and JavaScript web applications and with C/C++ programming languages. Candidates will gain deep knowledge of TensorFlow, PyTorch and Keras models and Hugging Face, together with the Docker service.

    In addition, you will be able to optimize and quantize models with the TensorRT framework for deployment in a variety of sectors. Moreover, you will learn how to deploy quantized LLM models to web pages developed with React, JavaScript and Flask, as sketched below.
    Here you will also learn how to integrate Reinforcement Learning (PPO) with Large Language Models in order to fine-tune them based on human feedback.
    Candidates will learn how to code and debug in the C/C++ programming languages at least at an intermediate level.
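
    To illustrate the Flask-based deployment mentioned above, here is a minimal serving sketch that exposes an emotion classifier through a REST endpoint using the Hugging Face pipeline API. The /predict route, the local model path and the JSON shape are assumptions for illustration; the course's own server adds sign-in, registration and MySQL on top of this idea.

```python
# Minimal sketch: serve a fine-tuned emotion model behind a Flask REST endpoint.
# Route name, model path and JSON shape are illustrative assumptions; the course's
# actual server adds sign-in, registration and MySQL persistence.
from flask import Flask, request, jsonify
from transformers import pipeline

app = Flask(__name__)

# Load the classifier once at startup (path assumed from the training sketch above).
classifier = pipeline("text-classification", model="bert-emotion")

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    text = payload.get("text", "")
    if not text:
        return jsonify({"error": "missing 'text' field"}), 400
    result = classifier(text)[0]          # highest-scoring class for the input text
    return jsonify({"label": result["label"], "score": float(result["score"])})

if __name__ == "__main__":
    # A React front end (e.g. via Axios) would POST JSON to http://localhost:5000/predict
    app.run(host="0.0.0.0", port=5000, debug=True)
```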

    LLM Models used:

    • The Falcon,

    • LLAMA2,

    • BLOOM,

    • MPT,

    • Vicuna,

    • FLAN-T5,

    • GPT-2/GPT-3, GPT-NeoX

    • BERT 101, DistilBERT

    • Fine-tuning small models under the supervision of big models

    • and so on... (a minimal model-loading sketch follows this list)
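
    As a quick illustration of how any of the models listed above can be pulled in, the sketch below loads FLAN-T5 from the Hugging Face Hub and runs a single generation; the google/flan-t5-base checkpoint and the prompt are assumptions used only for illustration.

```python
# Minimal sketch: load one of the listed models (FLAN-T5) from the Hugging Face Hub
# and run a single generation. Checkpoint name and prompt are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "google/flan-t5-base"                 # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

prompt = "Summarize: Generative AI models can be fine-tuned for specialized tasks."
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy generation kept short; decoder-only models (GPT-2, LLAMA2, ...) would use
# AutoModelForCausalLM instead of AutoModelForSeq2SeqLM.
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```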


    1. Learning and Installation of Docker from scratch

    2. Knowledge of JavaScript, HTML, CSS and Bootstrap

    3. React Hooks, DOM and JavaScript web development

    4. Deep Dive on Deep Learning Transformer based Natural Language Processing

    5. Python Flask REST API along with MySQL

    6. Preparation of DockerFiles, Docker Compose as well as Docker Compose Debug file

    7. Configuration and Installation of Plugin packages in Visual Studio Code

    8. Learning, installation and configuration of frameworks such as TensorFlow, PyTorch and Keras with Docker images from scratch

    9. Preprocessing and Preparation of Deep learning datasets for training and testing

    10. OpenCV  DNN with C++ Inference

    11. Training, Testing and Validation of Deep Learning frameworks

    12. Conversion of prebuilt models to ONNX and ONNX inference on images with C++ programming (a Python-side export sketch follows this list)

    13. Conversion of an ONNX model to a TensorRT engine with the C++ runtime and compile-time APIs

    14. TensorRT engine Inference on images and videos

    15. Comparison of achieved metrics and results between TensorRT and ONNX inference

    16. Prepare Yourself for C++ Object Oriented Programming Inference!

    17. Ready to solve any programming challenge with C/C++

    18. Ready to tackle deployment issues on edge devices as well as in the cloud

    19. Large Language Model fine-tuning

    20. Large Language Models Hands-On-Practice: BLOOM, GPT3-GPT3.5, FLAN-T5 family

    21. Large Language Model training, evaluation and user-defined prompt in-context learning / online learning

    22. Human feedback alignment on LLMs with Reinforcement Learning (PPO) with Large Language Models: BERT and FLAN-T5

    23. How to avoid the catastrophic forgetting problem on large multi-task LLM models.

    24. How to prepare LLMs for multi-task problems such as code generation, summarization, content analysis and image generation.

    25. Quantization of Large Language Models with various existing state-of-the-art techniques (a minimal dynamic-quantization sketch follows below)
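
    For items 12-15 the C++ inference side is covered in the videos; as a rough Python-side companion, exporting a PyTorch model to ONNX and sanity-checking the exported graph with onnxruntime can look like the sketch below. The ResNet-18 backbone, the file name and the input shape are illustrative assumptions.

```python
# Minimal sketch: export a PyTorch model to ONNX and sanity-check the file with
# onnxruntime before handing it to C++ (OpenCV DNN / TensorRT) code.
# The ResNet-18 backbone, file name and input shape are illustrative assumptions.
import numpy as np
import torch
import torchvision
import onnxruntime as ort

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)                # assumed NCHW input shape

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=17,
)

# Run the exported graph with onnxruntime and compare against PyTorch.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
onnx_out = session.run(None, {"input": dummy.numpy()})[0]
torch_out = model(dummy).detach().numpy()
print("max abs diff:", np.max(np.abs(onnx_out - torch_out)))
```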
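
    For item 25, one of the simplest techniques to try first is post-training dynamic INT8 quantization of the linear layers; the sketch below applies PyTorch's built-in quantize_dynamic to a BERT classifier, with the checkpoint name as an assumption. TensorRT INT8/FP16 quantization, as taught in the course, follows a separate calibration-based workflow.

```python
# Minimal sketch: post-training dynamic INT8 quantization of a BERT classifier
# with PyTorch. Checkpoint name is an illustrative assumption; TensorRT INT8/FP16
# quantization (covered in the course) uses a separate calibration-based workflow.
import os
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=6).eval()

# Replace nn.Linear weights with INT8 equivalents; activations stay in float.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

def size_mb(m):
    # Rough on-disk size comparison by serializing the state dict.
    torch.save(m.state_dict(), "tmp.pt")
    return os.path.getsize("tmp.pt") / 1e6

print("fp32 model: %.1f MB, int8 model: %.1f MB" % (size_mb(model), size_mb(quantized)))
```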


    • Important Note:
            In this course, there is nothing to copy and paste; you will put your hands on every line of the project to become a successful LLM and Web Application Developer!

    You DO NOT need any special hardware components. You will deliver the project either in the CLOUD or on your local computer.




    Fikrat Gasimov
    I am Fikrat Gasimov. I am a full-stack C/C++ Qt/QML and embedded software developer as well as an AI expert. My first experience was as a deep learning researcher in autonomous driving cars. I have worked in diverse sectors, developing algorithms as well as apps for Android and iOS mobile devices. In addition, I have worked on various mobile robots, such as Unitree and Japanese mobile robots, making them fully autonomous; apart from that, I am prototyping my own drones in terms of software and hardware. Moreover, I develop drivers for edge devices, including bootloaders and camera servos, as well as deep learning algorithms and ground control stations to control them remotely. I have 5 years of experience in sectors such as Unity 3D, automotive, AI & machine learning, drones, and cloud systems.
    • Language: English
    • Training sessions: 72
    • Duration: 8:54:45
    • Release Date: 2024/07/20