
Performance Tuning Deep Learning in Python - A Masterclass


Mike West

4:59:49

  • 01.01-introduction.mp4
    01:54
  • 01.02-course_overview.mp4
    01:49
  • 01.03-is_this_course_right_for_you.mp4
    01:07
  • 01.04-course_structure.mp4
    01:08
  • 01.05-neural_network_defined.mp4
    02:56
  • 01.06-framework_for_optional_learning.mp4
    02:15
  • 01.07-optimal_generalization_techniques.mp4
    02:53
  • 01.08-optimal_prediction_techniques.mp4
    03:27
  • 01.09-framework_application.mp4
    02:56
  • 01.10-diagnostic_learning_curves.mp4
    02:56
  • 01.11-the_fit_of_the_model.mp4
    02:55
  • 01.12-unrepresentative_dataset.mp4
    01:49
  • 02.01-neural_networks_learn_a_mapping_function.mp4
    03:00
  • 02.02-error_surface.mp4
    02:02
  • 02.03-features_of_the_error_surface.mp4
    02:27
  • 02.04-non-convex_error_surface.mp4
    03:14
  • 02.05-deep_learning_neural_network_components_part_1.mp4
    03:25
  • 02.06-deep_learning_neural_network_components_part_2.mp4
    02:32
  • 02.07-neural_network_model_capacity.mp4
    01:50
  • 02.08-anatomy_of_a_keras_model.mp4
    06:10
  • 02.09-demo_case_study_on_model_capacity_part_1.mp4
    01:51
  • 02.10-demo_case_study_on_model_capacity_part_2.mp4
    03:31
  • 02.11-demo_case_study_on_model_capacity_part_3.mp4
    02:55
  • 02.12-gradient_precision_with_batch_size.mp4
    03:57
  • 02.13-demo_case_study_on_batch_size_part_1.mp4
    03:08
  • 02.14-demo_case_study_on_batch_size_part_2.mp4
    03:47
  • 02.15-demo_case_study_on_batch_size_part_3.mp4
    01:15
  • 02.16-loss_function_defined.mp4
    02:39
  • 02.17-choosing_a_loss_function.mp4
    01:44
  • 02.18-demo_case_study_on_regression_loss_functions_part_1.mp4
    02:22
  • 02.19-demo_case_study_on_regression_loss_functions_part_2.mp4
    04:42
  • 02.20-demo_case_study_on_binary_classification_loss_functions_part_1.mp4
    02:11
  • 02.21-demo_case_study_on_binary_classification_loss_functions_part_2.mp4
    01:50
  • 02.22-demo_case_study_on_binary_classification_loss_functions_part_3.mp4
    03:03
  • 02.23-demo_case_study_on_multiclass_classification_loss_functions_part_1.mp4
    02:31
  • 02.24-demo_case_study_on_multiclass_classification_loss_functions_part_2.mp4
    03:27
  • 02.25-learning_rate_defined.mp4
    02:57
  • 02.26-configuring_the_learning_rate.mp4
    02:08
  • 02.27-learning_rate_schedules_and_adaptive_learning_rates.mp4
    02:18
  • 02.28-defining_learning_rates_in_keras.mp4
    02:49
  • 02.29-demo_case_study_on_learning_rates_part_1.mp4
    02:35
  • 02.30-demo_case_study_on_learning_rates_part_2.mp4
    04:15
  • 02.31-demo_case_study_on_learning_rates_part_3.mp4
    04:04
  • 02.32-demo_case_study_on_learning_rates_part_4.mp4
    02:12
  • 02.33-data_scaling.mp4
    01:45
  • 02.34-scaling_the_input_and_output_variables.mp4
    01:34
  • 02.35-normalize_and_standardize_(rescaling).mp4
    01:43
  • 02.36-demo_case_study_on_data_scaling_part_1.mp4
    01:34
  • 02.37-demo_case_study_on_data_scaling_part_2.mp4
    01:58
  • 02.38-demo_case_study_on_data_scaling_part_3.mp4
    01:49
  • 02.39-demo_case_study_on_data_scaling_part_4.mp4
    03:18
  • 02.40-activation_functions_and_vanishing_gradients.mp4
    02:49
  • 02.41-rectified_linear_activation_function_defined_and_implemented_in_python.mp4
    02:51
  • 02.42-when_relu_is_the_appropriate_choice.mp4
    01:07
  • 02.43-demo_case_study_on_vanishing_gradients_part_1.mp4
    03:08
  • 02.44-demo_case_study_on_vanishing_gradients_part_2.mp4
    02:31
  • 02.45-correct_exploding_gradients_with_clipping.mp4
    03:42
  • 02.46-gradient_clipping_in_keras.mp4
    01:24
  • 02.47-demo_case_study_on_exploding_gradients_part_1.mp4
    02:19
  • 02.48-demo_case_study_on_exploding_gradients_part_2.mp4
    01:48
  • 02.49-batch_normalization.mp4
    02:22
  • 02.50-tips_for_applying_batch_normalization.mp4
    01:59
  • 02.51-demo_case_study_on_batch_normalization_part_1.mp4
    02:40
  • 02.52-demo_case_study_on_batch_normalization_part_2.mp4
    02:43
  • 02.53-demo_greedy_layer-wise_pretraining_case_study_part_1.mp4
    03:34
  • 02.54-demo_greedy_layer-wise_pretraining_case_study_part_2.mp4
    04:06
  • 03.01-the_problem_of_overfitting.mp4
    02:44
  • 03.02-reduce_overfitting_by_constraining_complexity.mp4
    01:57
  • 03.03-regularization_approaches_for_neural_networks.mp4
    02:35
  • 03.04-penalize_large_weights_via_regularization.mp4
    02:05
  • 03.05-how_to_penalize_large_weights.mp4
    01:58
  • 03.06-tips_for_using_weight_regularization.mp4
    02:14
  • 03.07-demo_weight_regularization_case_study_part_1.mp4
    01:32
  • 03.08-demo_weight_regularization_case_study_part_2.mp4
    04:01
  • 03.09-activity_regularization.mp4
    02:11
  • 03.10-encouraging_smaller_activations.mp4
    02:48
  • 03.11-tips_for_activity_regularization.mp4
    02:48
  • 03.12-activity_regularization_in_keras.mp4
    02:35
  • 03.13-demo_activity_regularization_case_study.mp4
    03:19
  • 03.14-forcing_small_weights.mp4
    02:45
  • 03.15-how_to_use_a_weight_constraint.mp4
    01:19
  • 03.16-tips_for_applying_weight_constraints.mp4
    01:30
  • 03.17-weight_constraints_in_keras.mp4
    01:42
  • 03.18-demo_weight_constraint_case_study.mp4
    02:56
  • 03.19-dropout.mp4
    02:02
  • 03.20-dropout_mechanics.mp4
    01:27
  • 03.21-dropout_tips.mp4
    02:21
  • 03.22-dropout_in_keras.mp4
    02:53
  • 03.23-demo_dropout_case_study.mp4
    02:43
  • 03.24-noise_regularization.mp4
    02:46
  • 03.25-how_to_add_noise.mp4
    02:59
  • 03.26-noise_tips.mp4
    01:46
  • 03.27-adding_noise_in_keras.mp4
    02:07
  • 03.28-demo_noise_regularization_case_study.mp4
    03:18
  • 04.01-ensemble_learning.mp4
    02:12
  • 04.02-ensemble_neural_network_models.mp4
    01:06
  • 04.03-varying_the_major_elements.mp4
    04:44
  • 04.04-model_averaging_ensembles.mp4
    01:45
  • 04.05-ensembles_in_keras.mp4
    02:23
  • 04.06-demo_model_averaging_ensemble_case_study_part_1.mp4
    02:45
  • 04.07-demo_model_averaging_ensemble_case_study_part_2.mp4
    02:13
  • 04.08-demo_model_averaging_ensemble_case_study_part_3.mp4
    03:12
  • 04.09-weighted_average_ensembles.mp4
    02:49
  • 04.10-demo_weighted_average_ensemble_case_study_part_1.mp4
    02:42
  • 04.11-demo_weighted_average_ensemble_case_study_part_2.mp4
    02:59
  • 04.12-demo_weighted_average_ensemble_case_study_part_3.mp4
    03:40
  • 04.13-demo_weighted_average_ensemble_case_study_part_4.mp4
    02:22
  • 04.14-resampling_ensembles.mp4
    03:57
  • 04.15-demo_resampling_ensemble_case_study_part_1.mp4
    02:31
  • 04.16-demo_resampling_ensemble_case_study_part_2.mp4
    04:13
  • 04.17-demo_resampling_ensemble_case_study_part_3.mp4
    02:50
  • 04.18-demo_resampling_ensemble_case_study_part_4.mp4
    02:52
  • 04.19-horizontal_voting_ensembles.mp4
    02:17
  • 04.20-demo_horizontal_ensemble_case_study_part_1.mp4
    01:35
  • 04.21-demo_horizontal_ensemble_case_study_part_2.mp4
    03:41
  • 9781803243894_Code.zip
Description


    Deep learning neural networks have become easy to create. However, tuning these models for maximum performance remains something of a challenge for most modelers. This course will teach you how to get results as a machine learning practitioner.

    The course starts with an introduction to the problem of overfitting and a tour of regularization techniques. You’ll learn to configure stochastic gradient descent more effectively by tuning the batch size, loss function, and learning rate, and to avoid exploding gradients with gradient clipping. After that, you’ll reduce overfitting by updating the loss function with techniques such as weight regularization, weight constraints, and activity regularization. You’ll then apply dropout, the addition of noise, and early stopping, and combine the predictions from multiple models.
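
    To make these techniques concrete, here is a minimal Keras sketch (illustrative only, not the course’s own code) that combines an explicitly configured SGD optimizer with gradient-norm clipping, a chosen loss function and batch size, L2 weight regularization, dropout, and early stopping; the dataset is a random stand-in:

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Random stand-in data for a binary classification problem.
    X = np.random.rand(1000, 20).astype("float32")
    y = (X.sum(axis=1) > 10).astype("float32")

    model = keras.Sequential([
        # L2 weight regularization penalizes large weights.
        layers.Dense(64, activation="relu",
                     kernel_regularizer=keras.regularizers.l2(1e-4),
                     input_shape=(20,)),
        # Dropout randomly zeroes activations during training.
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])

    # SGD with an explicit learning rate and gradient-norm clipping
    # (clipnorm) to guard against exploding gradients.
    opt = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, clipnorm=1.0)

    # Binary cross-entropy is the standard loss for binary classification.
    model.compile(optimizer=opt, loss="binary_crossentropy", metrics=["accuracy"])

    # Early stopping halts training once validation loss stops improving.
    stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                         restore_best_weights=True)

    # batch_size controls how many samples contribute to each gradient estimate.
    model.fit(X, y, validation_split=0.3, epochs=100, batch_size=32,
              callbacks=[stop], verbose=0)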

    You’ll also look at ensemble learning techniques, diagnose poor model training and problems such as premature convergence, and accelerate the model training process. Then you’ll combine the predictions from multiple models saved during a single training run using techniques such as horizontal voting ensembles and snapshot ensembles.
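
    The simplest such combination is a model averaging ensemble. The sketch below (illustrative, with a hypothetical make_member helper and random stand-in data) trains several identically configured models that differ only in random initialization and averages their predicted probabilities:

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    def make_member(X, y):
        # One ensemble member: same architecture each time, but a
        # different random initialization per call.
        model = keras.Sequential([
            layers.Dense(32, activation="relu", input_shape=(X.shape[1],)),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")
        model.fit(X, y, epochs=20, batch_size=32, verbose=0)
        return model

    # Random stand-in data for a binary classification problem.
    X = np.random.rand(500, 10).astype("float32")
    y = (X.sum(axis=1) > 5).astype("float32")

    # Averaging the members' predicted probabilities reduces the variance
    # contributed by any single training run.
    members = [make_member(X, y) for _ in range(5)]
    avg_probs = np.mean([m.predict(X, verbose=0) for m in members], axis=0)
    y_hat = (avg_probs > 0.5).astype("int32")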

    Finally, you’ll diagnose high variance in a final model and improve its average predictive skill.
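
    One simple diagnostic (again illustrative, with random stand-in data): repeat the same training run several times and summarize the spread of held-out scores, since a wide spread across runs signals a high-variance final model:

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Random stand-in data, split into train and held-out test portions.
    X = np.random.rand(500, 10).astype("float32")
    y = (X.sum(axis=1) > 5).astype("float32")
    X_train, X_test, y_train, y_test = X[:400], X[400:], y[:400], y[400:]

    def fit_and_score():
        # Train from a fresh random initialization and score on held-out data.
        model = keras.Sequential([
            layers.Dense(32, activation="relu", input_shape=(X.shape[1],)),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.fit(X_train, y_train, epochs=20, batch_size=32, verbose=0)
        return model.evaluate(X_test, y_test, verbose=0)[1]

    # A large standard deviation across repeated runs indicates high variance.
    scores = [fit_and_score() for _ in range(10)]
    print(f"accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")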

    By the end of this course, you’ll have learned a range of techniques for getting better results from deep learning models.

    All the resource files are available in the GitHub repository at https://github.com/PacktPublishing/Performance-Tuning-Deep-Learning-Models-Master-Class



    Mike has Bachelor of Science degrees in Business and Psychology. He started his career as a middle school psychologist before moving into the information technology space. His love of computers led him to spend many additional hours working on computers while studying for his master's degree in Statistics. His current areas of interest include Machine Learning, Data Engineering, and SQL Server. When not working, Mike enjoys spending time with his family and traveling.
    Packt is a publishing company founded in 2003 headquartered in Birmingham, UK, with offices in Mumbai, India. Packt primarily publishes print and electronic books and videos relating to information technology, including programming, web design, data analysis and hardware.
    • Language: English
    • Training sessions: 115
    • Duration: 4:59:49
    • Release date: 2023/02/26