Streaming Data Changes to a Data Lake with Debezium and Delta Lake Pipeline


WORK-IN-PROGRESS

[Figure: delta-architecture diagram]

This is an example end-to-end project that demonstrates the Debezium-Delta Lake combo pipeline.

See the accompanying Medium post for more details: https://medium.com/@yinondn/streaming-data-changes-to-a-data-lake-with-debezium-and-delta-lake-pipeline-299821053dc3

High Level Strategy Overview

  • Debezium reads the database logs, produces JSON messages that describe the changes, and streams them to Kafka
  • Kafka streams the messages and stores them in an S3 folder. We call this the Bronze table, as it stores the raw messages
  • Using Spark with Delta Lake, we transform the messages into INSERT, UPDATE and DELETE operations and apply them to the target data lake table. This table holds the latest state of all source databases. We call it the Silver table (see the merge sketch after this list)
  • Finally, we can run further aggregations on the Silver table for analytics. We call the result the Gold table
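The Silver-table step is essentially an upsert driven by Debezium's change-event envelope (op is 'c' for create, 'u' for update, 'd' for delete, 'r' for snapshot reads). Below is a minimal PySpark sketch of that merge, assuming a Databricks-style spark session, hypothetical S3 paths, and a voters table with just id and name columns; it is an illustration, not the project's actual notebook code.

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Bronze: raw Debezium JSON change events (path and columns are illustrative)
raw = spark.read.json("s3a://my-bucket/bronze/voters/")

# Keep the operation type plus the row image: "after" for inserts/updates,
# "before" for deletes (where "after" is null in the Debezium envelope)
changes = raw.select(
    F.col("payload.op").alias("op"),
    F.coalesce(F.col("payload.after.id"), F.col("payload.before.id")).alias("id"),
    F.col("payload.after.name").alias("name"),  # hypothetical voter column
)

silver = DeltaTable.forPath(spark, "s3a://my-bucket/silver/voters")
(
    silver.alias("t")
    .merge(changes.alias("s"), "t.id = s.id")
    .whenMatchedDelete(condition="s.op = 'd'")                      # DELETE removes the row
    .whenMatchedUpdate(set={"name": "s.name"})                      # UPDATE overwrites columns
    .whenNotMatchedInsert(values={"id": "s.id", "name": "s.name"})  # INSERT adds the row
    .execute()
)
```

Note that MERGE requires at most one source row per key, so a real job would first deduplicate each micro-batch down to the latest event per id.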

Components

  • compose: Docker Compose configuration that deploys the Debezium stack (Kafka, ZooKeeper and Kafka Connect), reads changes from the source databases, and streams them to S3
  • voter-processing: notebook with PySpark code that transforms Debezium messages into INSERT, UPDATE and DELETE operations (a sample change event is shown after this list)
  • fake_it: for an end-to-end example, a simulator of a voters-book application's database that generates live input
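For reference, every message landing in the Bronze folder follows Debezium's change-event envelope. A trimmed, hypothetical update event for a voters table might look like this (the field values and the mysql source are illustrative; the exact shape depends on the connector and converter configuration):

```json
{
  "payload": {
    "before": { "id": 42, "name": "Jane Doe" },
    "after":  { "id": 42, "name": "Jane Smith" },
    "source": { "connector": "mysql", "table": "voters" },
    "op": "u",
    "ts_ms": 1589355606000
  }
}
```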

Instructions

Start up Docker Compose

```sh
export DEBEZIUM_VERSION=1.0
cd compose
docker-compose up -d
```
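To verify the stack is up, list the containers and check that the Kafka Connect REST API responds (service names are whatever the compose file defines; the 8084 port is taken from the registration step below):

```sh
docker-compose ps                          # the Kafka, ZooKeeper and Connect containers should be Up
curl -s http://localhost:8084/connectors/  # returns [] until a connector is registered
```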

Configure the Debezium connector

```sh
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" http://localhost:8084/connectors/ -d @debezium/config.json
```
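The actual settings live in debezium/config.json in the repository and are not reproduced here. For orientation, a Debezium 1.0 connector registration typically looks like the following sketch for a MySQL source (the connector class, hostnames, and topic names are illustrative assumptions; the project's source database may differ):

```json
{
  "name": "voters-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "voters-db",
    "database.whitelist": "voters",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.voters"
  }
}
```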

Run the Spark notebook

Import the notebook file voter-processing/voter-processing.html into a Databricks Community Edition account (https://community.cloud.databricks.com/) and follow the instructions inside the notebook.
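Inside the notebook, the first step amounts to pointing Spark at the Bronze folder. Assuming JSON files in S3 (and S3 credentials configured on the cluster), the read is a one-liner; the path is hypothetical:

```python
bronze = spark.read.json("s3a://my-bucket/bronze/voters/")
bronze.printSchema()  # shows the Debezium envelope: payload.before / payload.after / payload.op
```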

TODO - to complete the end-to-end example flow

  • Change voter-processing from a notebook into a standalone PySpark application
  • Add the PySpark application to the Docker Compose setup
  • Change the configuration so that Kafka writes to the local file system instead of S3
  • Change the Spark application so that it reads Kafka's output instead of generating its own mock data (a minimal read-from-Kafka sketch follows this list)
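For that last item, reading Debezium's output directly from Kafka with Structured Streaming would look roughly like this; the bootstrap server and topic name follow the hypothetical names used in the sketches above (Debezium names topics <server>.<database>.<table>):

```python
from pyspark.sql import functions as F

# Structured Streaming read from the Debezium topic (names are illustrative)
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")
    .option("subscribe", "voters-db.voters.voters")
    .option("startingOffsets", "earliest")
    .load()
    .select(F.col("value").cast("string").alias("json"))  # Debezium message is in the value field
)
```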

What's Next?

Make this a configurable, generic tool that can be deployed on top of any supported source database.

