Streaming Data Changes to a Data Lake with Debezium and Delta Lake Pipeline



WORK-IN-PROGRESS

(delta-architecture diagram)

Streaming data changes to a Data Lake with Debezium and Delta Lake pipeline https://medium.com/@yinondn/streaming-data-changes-to-a-data-lake-with-debezium-and-delta-lake-pipeline-299821053dc3

This is an example end-to-end project that demonstrates the Debezium + Delta Lake pipeline.

See the Medium post for more details.

High Level Strategy Overview

  • Debezium reads the database logs, produces JSON messages that describe the changes, and streams them to Kafka
  • Kafka Connect streams the messages into an S3 folder. We call this the Bronze table, as it stores the raw messages
  • Using Spark with Delta Lake, we transform the messages into INSERT, UPDATE and DELETE operations and apply them to the target data lake table. This table holds the latest state of all source databases; we call it the Silver table
  • Finally, we can run further aggregations on the Silver table for analytics. We call that the Gold table
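The Bronze-to-Silver step above can be sketched in plain Python (no Spark), using the standard Debezium envelope fields `op`, `before` and `after`; the event payloads and the `id`/`name` columns below are made up for illustration:

```python
# Minimal sketch of how Debezium change events map to INSERT/UPDATE/DELETE
# on the Silver table. In the Debezium envelope, "op" is "c" (create),
# "r" (snapshot read), "u" (update) or "d" (delete); "before"/"after"
# hold the row images. The in-memory dict stands in for the Delta table.

def apply_change(table, event, key="id"):
    """Apply one Debezium change event to a table modeled as a dict keyed by id."""
    op = event["op"]
    if op in ("c", "r", "u"):       # create, snapshot read, update -> upsert
        row = event["after"]
        table[row[key]] = row
    elif op == "d":                 # delete -> drop the row identified by "before"
        table.pop(event["before"][key], None)
    return table

events = [
    {"op": "c", "before": None, "after": {"id": 1, "name": "Ada"}},
    {"op": "u", "before": {"id": 1, "name": "Ada"}, "after": {"id": 1, "name": "Ada L."}},
    {"op": "c", "before": None, "after": {"id": 2, "name": "Grace"}},
    {"op": "d", "before": {"id": 2, "name": "Grace"}, "after": None},
]

table = {}
for e in events:
    apply_change(table, e)

print(table)  # {1: {'id': 1, 'name': 'Ada L.'}}
```

In the real pipeline the same per-key, in-order application is expressed as a Delta Lake merge rather than a dict update.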

Components

  • compose: a Docker Compose configuration that deploys the Debezium stack (Kafka, ZooKeeper and Kafka Connect), reads changes from the source databases and streams them to S3
  • voter-processing: a notebook with PySpark code that transforms Debezium messages into INSERT, UPDATE and DELETE operations
  • fake_it: a simulator of a voter-book application's database with live input, providing source data for the end-to-end example

Instructions

Start up docker compose

  • export DEBEZIUM_VERSION=1.0
  • cd compose
  • docker-compose up -d
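For orientation, a Debezium stack in Docker Compose typically looks something like the sketch below. The actual service definitions live in the compose directory; image tags, service names and ports here are illustrative (the 8084 mapping matches the curl call used to register the connector in the next step):

```yaml
# Illustrative sketch only -- see compose/ for the real file.
version: "2"
services:
  zookeeper:
    image: debezium/zookeeper:${DEBEZIUM_VERSION}
  kafka:
    image: debezium/kafka:${DEBEZIUM_VERSION}
    depends_on: [zookeeper]
  connect:
    image: debezium/connect:${DEBEZIUM_VERSION}
    ports:
      - "8084:8083"   # Kafka Connect REST API, used by the connector registration below
    depends_on: [kafka]
```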

Config Debezium connector

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" http://localhost:8084/connectors/ -d @debezium/config.json
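The repository's debezium/config.json holds the actual connector settings. For reference, a Debezium 1.0 MySQL connector registration payload looks roughly like this; the connector name, hostnames, credentials and database names below are placeholders, not taken from the repo:

```json
{
  "name": "voters-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "voters",
    "database.whitelist": "voters_db",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.voters"
  }
}
```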

Run spark notebook

Import the notebook file voter-processing/voter-processing.html into a Databricks Community Edition account and follow the instructions inside the notebook
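For readers who cannot open the notebook, the core Silver-table step is, in outline, a Delta Lake merge. The sketch below is illustrative only: it assumes a Databricks/Spark session with Delta Lake available, and the paths, join key and column names are invented, not copied from the notebook:

```python
# Sketch only -- the real logic lives in the voter-processing notebook.
from delta.tables import DeltaTable

# Read the raw Debezium messages from the Bronze location (path is hypothetical)
changes = (spark.read.json("/mnt/bronze/voters/")
           .select("payload.op", "payload.before", "payload.after"))

silver = DeltaTable.forPath(spark, "/mnt/silver/voters")

(silver.alias("t")
 .merge(changes.alias("s"),
        "t.id = coalesce(s.after.id, s.before.id)")
 .whenMatchedDelete(condition="s.op = 'd'")                      # op "d" -> DELETE
 .whenMatchedUpdate(set={"id": "s.after.id",                     # op "u" -> UPDATE
                         "name": "s.after.name"})
 .whenNotMatchedInsert(condition="s.op != 'd'",                  # op "c"/"r" -> INSERT
                       values={"id": "s.after.id",
                               "name": "s.after.name"})
 .execute())
```

In practice the notebook also has to order events per key (e.g. by log position) so that a later update is not overwritten by an earlier one within the same batch.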

https://community.cloud.databricks.com/

TODO - To complete the end-to-end example flow

  • Change voter-processing from a notebook into a PySpark application
  • Add the PySpark application to the Docker Compose setup
  • Change the configuration so that Kafka writes to the local file system instead of S3
  • Change the Spark application so that it reads Kafka's output instead of generating its own mock data

What's Next?

Make it a configurable, generic tool that can be assembled on top of any supported source database

