
Introduction to Attention-Based Neural Networks


Janani Ravi

2:11:12

  • 01 - Prerequisites.mp4 (00:54)
  • 02 - What are attention-based models.mp4 (02:58)
  • 03 - Attention in language generation and translation models.mp4 (03:06)
  • 01 - Feed forward networks and their limitations.mp4 (05:17)
  • 02 - Recurrent neural networks for sequential data.mp4 (05:33)
  • 03 - The need for long memory cells.mp4 (05:00)
  • 04 - LSTM and GRU cells.mp4 (05:04)
  • 05 - Types of RNNs.mp4 (03:40)
  • 01 - Language generation models.mp4 (05:06)
  • 02 - Sequence to sequence models for language translation.mp4 (04:36)
  • 01 - The role of attention in sequence to sequence models.mp4 (04:53)
  • 02 - Attention mechanism in sequence to sequence models.mp4 (06:21)
  • 03 - Alignment weights in attention models.mp4 (02:25)
  • 04 - Bahdanau attention.mp4 (03:28)
  • 05 - Attention models for image captioning.mp4 (03:49)
  • 06 - Encoder decoder structure for image captioning.mp4 (03:45)
  • 01 - Setting up Colab and Google Drive.mp4 (04:07)
  • 02 - Loading in the Flickr8k dataset.mp4 (03:41)
  • 03 - Constructing the vocabulary.mp4 (04:37)
  • 04 - Setting up the dataset class.mp4 (03:02)
  • 05 - Implementing utility functions for training data.mp4 (05:12)
  • 06 - Building the encoder CNN.mp4 (04:11)
  • 07 - Building the decoder RNN.mp4 (05:42)
  • 08 - Setting up the sequence to sequence model.mp4 (02:49)
  • 09 - Training the image captioning model.mp4 (03:53)
  • 01 - Loading the dataset and setting up utility functions.mp4 (03:36)
  • 02 - The encoder CNN generating unrolled feature maps.mp4 (03:54)
  • 03 - Implementing Bahdanau attention.mp4 (02:44)
  • 04 - The decoder RNN using attention.mp4 (05:41)
  • 05 - Generating captions using attention.mp4 (02:25)
  • 06 - Training the attention-based image captioning model.mp4 (05:16)
  • 07 - Visualizing the model's attention.mp4 (02:31)
  • 01 - Summary and next steps.mp4 (01:56)
Description


    Attention-based models allow neural networks to focus on the most important features of the input, producing better outputs. In this course, Janani Ravi explains how recurrent neural networks work, then builds and trains two image captioning models, one without attention and one with attention, and compares their results. If you have some experience with and understanding of how neural networks work and want to see what attention-based models can do for you, check out this course.
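    To make the description concrete, below is a minimal sketch of Bahdanau (additive) attention in PyTorch, the mechanism the course applies to image captioning. It is one common implementation written under assumption, not code from the course; the class and parameter names (BahdanauAttention, encoder_dim, attn_dim, and so on) are hypothetical.

        # Minimal sketch of Bahdanau (additive) attention, assuming PyTorch.
        # All names here are illustrative, not taken from the course materials.
        import torch
        import torch.nn as nn

        class BahdanauAttention(nn.Module):
            def __init__(self, encoder_dim: int, decoder_dim: int, attn_dim: int):
                super().__init__()
                self.W_enc = nn.Linear(encoder_dim, attn_dim)  # project encoder features
                self.W_dec = nn.Linear(decoder_dim, attn_dim)  # project decoder hidden state
                self.score = nn.Linear(attn_dim, 1)            # scalar score per location

            def forward(self, features, hidden):
                # features: (batch, locations, encoder_dim), e.g. unrolled CNN feature maps
                # hidden:   (batch, decoder_dim), the decoder's current hidden state
                energy = torch.tanh(self.W_enc(features) + self.W_dec(hidden).unsqueeze(1))
                weights = torch.softmax(self.score(energy), dim=1)  # alignment weights
                context = (weights * features).sum(dim=1)           # attention-weighted context
                return context, weights.squeeze(-1)

        # Hypothetical usage: 4 images, 49 locations (a 7x7 feature map), 256-dim features
        attn = BahdanauAttention(encoder_dim=256, decoder_dim=512, attn_dim=128)
        context, weights = attn(torch.randn(4, 49, 256), torch.randn(4, 512))

    At each decoding step the context vector is combined with the current word embedding before the decoder RNN cell, and the alignment weights can be reshaped to the feature-map grid to visualize where the model "looks", as in the course's attention-visualization lesson.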



    LinkedIn Learning is an American online learning provider. It provides video courses taught by industry experts in software, creative, and business skills. It is a subsidiary of LinkedIn. All the courses on LinkedIn Learning fall into four categories: Business, Creative, Technology, and Certifications. It was founded in 1995 by Lynda Weinman as Lynda.com before being acquired by LinkedIn in 2015. Microsoft acquired LinkedIn in December 2016.
    • Language: English
    • Training sessions: 33
    • Duration: 2:11:12
    • Release date: 2022/12/11
