
Apache Druid for Data Engineers (Hands-On)


Bigdata Engineer

2:23:29

  • 1. Introduction to Course.mp4
    08:57
  • 2. Real-time Analytics Databases.mp4
    01:21
  • 3. What is Apache Druid.mp4
    02:15
  • 4. Key Features of Druid.mp4
    05:18
  • 5. Technology.mp4
    09:57
  • 6. Use cases.mp4
    10:32
  • 7. When to use Druid.mp4
    02:01
  • 8. When not to use Druid.mp4
    01:21
  • 9. List of Company using Apache Druid.mp4
    03:44
  • 1. Installation.mp4
    05:56
  • 2. Start up Druid services.mp4
    03:56
  • 3. Open the web console.mp4
    01:53
  • 4. Load data.mp4
    06:35
  • 5. Query data.mp4
    03:04
  • 6. Overview of the Druid Web Console Part 1.mp4
    06:44
  • 7. Overview of the Druid Web Console Part 2.mp4
    03:37
  • 1. Druid Architecture.mp4
    02:09
  • 2. Druid Servers.mp4
    02:54
  • 3. Druid Services.mp4
    07:10
  • 4. Druid Services in Simple terms.mp4
    04:20
  • 5. External Dependencies.mp4
    04:00
  • 1. Datasources and Segments.mp4
    03:11
  • 2. Segment Identifiers.mp4
    00:46
  • 1. Introduction to Segments.mp4
    02:24
  • 2. Segment File Structure.mp4
    01:49
  • 1. Load Data from Local Files.mp4
    06:43
  • 2. Load Data from URI.mp4
    04:14
  • 3. Load Data from Kafka (Prerequisite Introduction to Kafka).mp4
    01:17
  • 4. Installing Single Node Kafka Cluster.html
  • 5. Change the following to avoid Zookeeper Issue conflict.html
  • 6.1 Druid Kafka.txt
  • 6. Load Data from Kafka Part 1.mp4
    08:52
  • 7. Load Data from Kafka Part 2.mp4
    03:48
  • 8. Query Data Explain Plan.mp4
    01:52
  • 9.1 Aggregate data with rollup.txt
  • 9. Aggregate data with rollup.mp4
    10:49
  • 1. Is Druid a data warehouse When should I use Druid over RedshiftBigQuery.html
  • 2. Is Druid log aggregationlog search system When should I use Druid over Elastic.html
  • 3. Is Druid a timeseries database When should I use Druid.html
  • 4. Does Druid separate storage and compute.html
  • 5. How is Druid deployed.html
  • 6. Where does Druid fit in my big data stack.html
  • 7. Is Druid in-memory.html
  • 8. Do clients ever contact the Coordinator process.html
  • 9. Does it matter if Coordinator process starts up before or after other processes.html
    Description


    Learn everything about Apache Druid, a modern real-time analytics database.

    What You'll Learn?


    • Understanding of basic architecture of Apache Druid
    • Installing and Configuring Apache Druid
    • Apache Druid Design, Ingestion, Data management, Querying
    • Frequently asked Questions

    Who is this for?


  • Database Engineer, Big Data Engineer, Data Engineer, Data Analyst, Data Scientist, Machine Learning Engineer
    What You Need to Know?


  • Basic knowledge of SQL is helpful, but it's fine if you don't have any database management experience.
  • Linux operating system required
  • 8 GB of RAM required


    Description

    Druid is a high-performance, real-time analytics database that delivers sub-second queries on streaming and batch data at scale and under load.

    Apache Druid is a real-time analytics database designed for fast slice-and-dice analytics ("OLAP" queries) on large data sets. Most often, Druid powers use cases where real-time ingestion, fast query performance, and high uptime are important.

    Druid is commonly used as the database backend for GUIs of analytical applications, or for highly-concurrent APIs that need fast aggregations. Druid works best with event-oriented data.
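As a taste of what those fast aggregations look like in practice, here is a minimal sketch of issuing a Druid SQL query over HTTP. It assumes the default quickstart Router address (`localhost:8888`) and a hypothetical `wikipedia` datasource; adjust both for your deployment.

```python
import json
from urllib import request

# Druid SQL is plain HTTP + JSON. This is the default quickstart
# Router endpoint; "wikipedia" below is a hypothetical datasource.
DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

def build_sql_payload(query: str) -> bytes:
    """Encode a Druid SQL query as the JSON body the endpoint expects."""
    return json.dumps({"query": query}).encode("utf-8")

def run_query(query: str):
    """POST the query to Druid and return the decoded JSON result rows."""
    req = request.Request(
        DRUID_SQL_URL,
        data=build_sql_payload(query),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # A typical slice-and-dice aggregation: top channels by edit count.
    rows = run_query(
        "SELECT channel, COUNT(*) AS edits "
        "FROM wikipedia GROUP BY channel ORDER BY edits DESC LIMIT 5"
    )
    print(rows)
```

The same query can also be run interactively from the web console covered in the course.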


    Real-time analytics databases handle analytics on large amounts of data by optimizing resources for compute-heavy workloads, and working with them is one of the most valuable technology skills. This course is specifically designed to bring you up to speed on one of the best technologies for this task, Apache Druid! Top technology companies like Google, Facebook, Netflix, Airbnb, Amazon, NASA, and more are all using Apache Druid!


    Apache Druid Essentials: Unleashing Real-time Analytics and Scalable Data Exploration

    Unlock the potential of real-time analytics and scalable data exploration with our comprehensive Apache Druid Essentials course. In this dynamic program, participants will delve into the world of Apache Druid, an open-source, high-performance analytics database designed for fast query response and seamless scalability.


    Key Learning Objectives:

    • Introduction to Course

    • Real-time Analytics Databases

    • What is Apache Druid?

    • Key Features of Druid

    • Technology

    • Use cases

    • When to use Druid

    • When not to use Druid

    • List of Companies using Apache Druid

    • Installation of Apache Druid

    • Start up Druid services

    • Open the web console

    • Load data

    • Query data

    • Overview of the Druid Web Console

    • Architecture of Druid

    • Druid Servers

    • External Dependencies

    • Storage Design

    • Datasources and Segments

    • Segment Identifiers

    • Segments

    • Introduction to Segments

    • Segment File Structure

    • Data Loading in Druid

    • Load Data from Local Files

    • Load Data from URI

    • Load Data from Kafka (Prerequisite Introduction to Kafka)

    • Installing Single Node Kafka Cluster

    • Change the following to avoid Zookeeper Issue conflict

    • Load Data from Kafka

    • Query Data Explain Plan

    • Aggregate data with rollup

    • Frequently Asked Questions
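Rollup, listed among the objectives above, is Druid's ingestion-time pre-aggregation: rows that share the same truncated timestamp and dimension values are collapsed into one stored row carrying only aggregates. A rough illustration of the idea in plain Python (not Druid code; the event fields are invented for the example):

```python
from collections import defaultdict

# Toy events: (timestamp, dimension "page", metric "bytes").
events = [
    ("2024-01-01T10:03", "home", 120),
    ("2024-01-01T10:17", "home", 80),
    ("2024-01-01T10:42", "about", 50),
    ("2024-01-01T11:05", "home", 30),
]

def rollup(rows, truncate=lambda ts: ts[:13]):
    """Pre-aggregate rows by (hour, page), mimicking Druid's
    ingestion-time rollup with COUNT and SUM(bytes) aggregators."""
    acc = defaultdict(lambda: {"count": 0, "bytes": 0})
    for ts, page, nbytes in rows:
        key = (truncate(ts), page)          # hour granularity
        acc[key]["count"] += 1
        acc[key]["bytes"] += nbytes
    return {k: dict(v) for k, v in acc.items()}

rolled = rollup(events)
# Three stored rows replace four raw events, e.g.
# ("2024-01-01T10", "home") -> {"count": 2, "bytes": 200}
```

Druid performs this during ingestion, so queries scan the smaller pre-aggregated segments instead of raw events.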


    Who this course is for:

    • Database Engineer, Big Data Engineer, Data Engineer, Data Analyst, Data Scientist, Machine Learning Engineer

    Instructor: Bigdata Engineer
    I am a Solution Architect with 12+ years of experience in the Banking, Telecommunication, and Financial Services industries, across a diverse range of roles in Credit Card, Payments, Data Warehouse, and Data Center programmes. My role as Bigdata and Cloud Architect is to work as part of the Bigdata team to provide software solutions. Responsibilities include:
    • Support all Hadoop-related issues
    • Benchmark existing systems, analyse existing system challenges/bottlenecks, and propose the right solutions to eliminate them based on various Big Data technologies
    • Analyse and define pros and cons of various technologies and platforms
    • Define use cases, solutions, and recommendations
    • Define Big Data strategy
    • Perform detailed analysis of business problems and technical environments
    • Define pragmatic Big Data solutions based on customer requirements analysis
    • Define pragmatic Big Data cluster recommendations
    • Educate customers on various Big Data technologies to help them understand the pros and cons of Big Data
    • Data Governance
    • Build tools to improve developer productivity and implement standard practices
    I am sure the knowledge in these courses can give you extra power to win in life. All the best!!
    • Language: English
    • Training sessions: 32
    • Duration: 2:23:29
    • Release Date: 2024/03/01