
Microsoft Azure Data Engineer Associate (DP-203) Cert Prep by Microsoft Press


Microsoft Press and Tim Warner

Duration: 9:25:15

  • 001 - Introduction.mp4
    05:20
  • 002 - Learning objectives.mp4
    00:24
  • 003 - Design an Azure Data Lake solution.mp4
    03:59
  • 004 - Recommend file types for storage.mp4
    01:56
  • 005 - Recommend file types for analytical queries.mp4
    01:01
  • 006 - Design for efficient querying.mp4
    15:47
  • 007 - Learning objectives.mp4
    00:24
  • 008 - Design a folder structure that represents levels of data transformation.mp4
    02:32
  • 009 - Design a distribution strategy.mp4
    02:56
  • 010 - Design a data archiving solution.mp4
    20:41
  • 011 - Learning objectives.mp4
    00:36
  • 012 - Design a partition strategy for files.mp4
    03:58
  • 013 - Design a partition strategy for analytical workloads.mp4
    01:57
  • 014 - Design a partition strategy for efficiency and performance.mp4
    02:56
  • 015 - Design a partition strategy for Azure Synapse Analytics.mp4
    02:02
  • 016 - Identify when partitioning is needed in Azure Data Lake Storage Gen2.mp4
    23:46
  • 017 - Learning objectives.mp4
    00:40
  • 018 - Design star schemas.mp4
    03:13
  • 019 - Design slowly changing dimensions.mp4
    02:23
  • 020 - Design a dimensional hierarchy.mp4
    00:28
  • 021 - Design a solution for temporal data.mp4
    01:16
  • 022 - Design for incremental loading.mp4
    00:51
  • 023 - Design analytical stores.mp4
    02:18
  • 024 - Design metastores in Azure Synapse Analytics and Azure Databricks.mp4
    24:10
  • 025 - Learning objectives.mp4
    00:29
  • 026 - Implement compression.mp4
    01:42
  • 027 - Implement partitioning.mp4
    00:42
  • 028 - Implement sharding.mp4
    00:18
  • 029 - Implement different table geometries with Azure Synapse Analytics pools.mp4
    02:03
  • 030 - Implement data redundancy.mp4
    03:55
  • 031 - Implement distributions.mp4
    00:27
  • 032 - Implement data archiving.mp4
    13:19
  • 033 - Learning objectives.mp4
    00:30
  • 034 - Build a temporal data solution.mp4
    00:49
  • 035 - Build a slowly changing dimension.mp4
    00:53
  • 036 - Build a logical folder structure.mp4
    01:03
  • 037 - Build external tables.mp4
    02:31
  • 038 - Implement file and folder structures for efficient querying and data pruning.mp4
    10:00
  • 039 - Learning objectives.mp4
    00:30
  • 040 - Deliver data in a relational star schema.mp4
    01:34
  • 041 - Deliver data in Parquet files.mp4
    00:46
  • 042 - Maintain metadata.mp4
    00:33
  • 043 - Implement a dimensional hierarchy.mp4
    12:43
  • 044 - Learning objectives.mp4
    00:31
  • 045 - Transform data by using Apache Spark.mp4
    02:29
  • 046 - Transform data by using Transact-SQL.mp4
    01:05
  • 047 - Transform data by using Data Factory.mp4
    01:35
  • 048 - Transform data by using Azure Synapse pipelines.mp4
    01:24
  • 049 - Transform data by using Stream Analytics.mp4
    19:50
  • 050 - Learning objectives.mp4
    00:32
  • 051 - Cleanse data.mp4
    02:39
  • 052 - Split data.mp4
    01:40
  • 053 - Shred JSON.mp4
    02:03
  • 054 - Encode and decode data.mp4
    09:35
  • 055 - Learning objectives.mp4
    00:33
  • 056 - Configure error handling for the transformation.mp4
    01:38
  • 057 - Normalize and denormalize values.mp4
    02:19
  • 058 - Transform data by using Scala.mp4
    01:22
  • 059 - Perform data exploratory analysis.mp4
    13:15
  • 060 - Learning objectives.mp4
    00:46
  • 061 - Develop batch processing solutions by using Data Factory, Data Lake, Spark, Azure Syn.mp4
    01:14
  • 062 - Create data pipelines.mp4
    02:00
  • 063 - Design and implement incremental data loads.mp4
    01:20
  • 064 - Design and develop slowly changing dimensions.mp4
    00:36
  • 065 - Handle security and compliance requirements.mp4
    02:35
  • 066 - Scale resources.mp4
    21:11
  • 067 - Learning objectives.mp4
    00:37
  • 068 - Configure the batch size.mp4
    02:26
  • 069 - Design and create tests for data pipelines.mp4
    03:31
  • 070 - Integrate Jupyter and Python Notebooks into a data pipeline.mp4
    01:15
  • 071 - Handle duplicate data.mp4
    00:23
  • 072 - Handle missing data.mp4
    00:36
  • 073 - Handle late-arriving data.mp4
    07:39
  • 074 - Learning objectives.mp4
    00:39
  • 075 - Upsert data.mp4
    01:52
  • 076 - Regress to a previous state.mp4
    02:14
  • 077 - Design and configure exception handling.mp4
    01:44
  • 078 - Configure batch retention.mp4
    01:02
  • 079 - Revisit batch processing solution design.mp4
    01:16
  • 080 - Debug Spark jobs by using the Spark UI.mp4
    24:55
  • 081 - Learning objective.mp4
    00:46
  • 082 - Develop a stream processing solution by using Stream Analytics, Azure Databricks, and.mp4
    01:53
  • 083 - Process data by using Spark structured streaming.mp4
    01:52
  • 084 - Monitor for performance and functional regressions.mp4
    01:34
  • 085 - Design and create windowed aggregates.mp4
    01:50
  • 086 - Handle schema drift.mp4
    21:50
  • 087 - Learning objectives.mp4
    00:47
  • 088 - Process time series data.mp4
    01:53
  • 089 - Process across partitions.mp4
    02:09
  • 090 - Process within one partition.mp4
    01:00
  • 091 - Configure checkpoints and watermarking during processing.mp4
    01:02
  • 092 - Scale resources.mp4
    01:49
  • 093 - Design and create tests for data pipelines.mp4
    01:20
  • 094 - Optimize pipelines for analytical or transactional purposes.mp4
    15:26
  • 095 - Learning objectives.mp4
    00:28
  • 096 - Handle interruptions.mp4
    01:41
  • 097 - Design and configure exception handling.mp4
    00:41
  • 098 - Upsert data.mp4
    01:25
  • 099 - Replay archived stream data.mp4
    01:49
  • 100 - Design a stream processing solution.mp4
    09:53
  • 101 - Learning objectives.mp4
    00:34
  • 102 - Trigger batches.mp4
    01:53
  • 103 - Handle failed batch loads.mp4
    01:50
  • 104 - Validate batch loads.mp4
    00:45
  • 105 - Manage data pipelines in Data Factory and Synapse pipelines.mp4
    01:16
  • 106 - Schedule data pipelines in Data Factory and Synapse pipelines.mp4
    00:22
  • 107 - Implement version control for pipeline artifacts.mp4
    00:56
  • 108 - Manage Spark jobs in a pipeline.mp4
    12:01
  • 109 - Learning objectives.mp4
    00:26
  • 110 - Design data encryption for data at rest and in transit.mp4
    01:57
  • 111 - Design a data auditing strategy.mp4
    00:36
  • 112 - Design a data masking strategy.mp4
    01:17
  • 113 - Design for data privacy.mp4
    11:40
  • 114 - Learning objectives.mp4
    00:37
  • 115 - Design a data retention policy.mp4
    01:22
  • 116 - Design to purge data based on business requirements.mp4
    01:06
  • 117 - Design Azure RBAC and POSIX-like ACL for Data Lake Storage Gen2.mp4
    01:37
  • 118 - Design row-level and column-level security.mp4
    14:33
  • 119 - Learning objectives.mp4
    00:40
  • 120 - Implement data masking.mp4
    01:49
  • 121 - Encrypt data at rest and in motion.mp4
    01:40
  • 122 - Implement row-level and column-level security.mp4
    00:18
  • 123 - Implement Azure RBAC.mp4
    01:31
  • 124 - Implement POSIX-like ACLs for Data Lake Storage Gen2.mp4
    00:54
  • 125 - Implement a data retention policy.mp4
    00:21
  • 126 - Implement a data auditing strategy.mp4
    15:28
  • 127 - Learning objectives.mp4
    00:40
  • 128 - Manage identities, keys, and secrets across different data platforms.mp4
    02:20
  • 129 - Implement secure endpoints Private and public.mp4
    01:38
  • 130 - Implement resource tokens in Azure Databricks.mp4
    01:34
  • 131 - Load a DataFrame with sensitive information.mp4
    00:54
  • 132 - Write encrypted data to tables or Parquet files.mp4
    00:34
  • 133 - Manage sensitive information.mp4
    16:51
  • 134 - Learning objectives.mp4
    00:30
  • 135 - Implement logging used by Azure Monitor.mp4
    01:09
  • 136 - Configure monitoring services.mp4
    01:15
  • 137 - Measure performance of data movement.mp4
    00:57
  • 138 - Monitor and update statistics about data across a system.mp4
    01:12
  • 139 - Monitor data pipeline performance.mp4
    00:13
  • 140 - Measure query performance.mp4
    10:15
  • 141 - Learning objectives.mp4
    00:34
  • 142 - Monitor cluster performance.mp4
    01:32
  • 143 - Understand custom logging options.mp4
    01:34
  • 144 - Schedule and monitor pipeline tests.mp4
    01:58
  • 145 - Interpret Azure Monitor metrics and logs.mp4
    01:21
  • 146 - Interpret a Spark Directed Acyclic Graph (DAG).mp4
    16:44
  • 147 - Learning objectives.mp4
    00:32
  • 148 - Compact small files.mp4
    01:09
  • 149 - Rewrite user-defined functions (UDFs).mp4
    01:26
  • 150 - Handle skew in data.mp4
    01:50
  • 151 - Handle data spill.mp4
    01:29
  • 152 - Tune shuffle partitions.mp4
    01:07
  • 153 - Find shuffling in a pipeline.mp4
    00:21
  • 154 - Optimize resource management.mp4
    12:00
  • 155 - Learning objectives.mp4
    00:31
  • 156 - Tune queries by using indexers.mp4
    01:53
  • 157 - Tune queries by using cache.mp4
    00:55
  • 158 - Optimize pipelines for analytical or transactional purposes.mp4
    01:38
  • 159 - Optimize pipeline for descriptive versus analytical workloads.mp4
    01:28
  • 160 - Troubleshoot failed Spark jobs.mp4
    00:30
  • 161 - Troubleshoot failed pipeline runs.mp4
    01:14
  • 162 - Summary.mp4
    02:08
    Description


    In this course, Microsoft MVP Tim Warner walks you through what to expect on the DP-203 Data Engineering on Microsoft Azure exam, covering every Exam DP-203 objective in a friendly and logical way. Tim dives into the intricacies of data engineering on Microsoft Azure, focusing on deploying efficient, secure, and robust data processing solutions. Learn how to design and implement diverse data storage strategies, including leveraging Azure Synapse Analytics for managing massive datasets efficiently. Discover techniques for data compression, partitioning, and sharding to optimize storage and access speed. Investigate table geometries, data redundancy, and archival methods to ensure data is both accessible and protected. Ideal for IT professionals, data scientists, and anyone interested in the data engineering capabilities of Azure, this course empowers you to build scalable data solutions and ensure that your data-driven applications perform seamlessly.
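
    Several lessons above center on file partitioning and data pruning in the data lake. As a rough illustration only (not taken from the course), the following PySpark sketch shows the kind of partitioned Parquet write those lessons discuss; the column names, sample rows, and storage path are hypothetical.

        # Illustrative sketch, not course material: write a DataFrame to a data
        # lake as Parquet files partitioned by year and month. Column names,
        # sample rows, and the output path are hypothetical.
        from pyspark.sql import SparkSession
        from pyspark.sql.functions import year, month, to_date

        spark = SparkSession.builder.appName("partition-demo").getOrCreate()

        sales = spark.createDataFrame(
            [("2024-01-15", "eu", 120.0), ("2024-02-03", "us", 75.5)],
            ["order_date", "region", "amount"],
        )

        # Derive partition columns, then write partitioned Parquet so queries
        # that filter on year/month can skip whole folders instead of scanning
        # every file.
        (sales
            .withColumn("year", year(to_date("order_date")))
            .withColumn("month", month(to_date("order_date")))
            .write.mode("overwrite")
            .partitionBy("year", "month")
            .parquet("abfss://lake@example.dfs.core.windows.net/sales"))

    Queries that filter on year and month can then prune unrelated folders entirely, which is the efficiency gain the partitioning and data-pruning lessons refer to.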



    LinkedIn Learning is an American online learning provider that offers video courses taught by industry experts in software, creative, and business skills. It is a subsidiary of LinkedIn, and its courses fall into four categories: Business, Creative, Technology, and Certifications. The company was founded in 1995 by Lynda Weinman as Lynda.com, was acquired by LinkedIn in 2015, and Microsoft acquired LinkedIn in December 2016.
    • Language: English
    • Training sessions: 162
    • Duration: 9:25:15
    • Subtitles: English
    • Release date: 2024/12/14