The High Cost of Splitting Related Data


Consider the following simple architecture:

[Diagram: one application server and one database containing two related tables]

The two tables in the database are related. I use ‘related’ loosely: there could be a foreign key from one table to another, maybe a shared identifier. To generalise, it is data that tends to be combined when queried.
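A minimal sketch of such a relationship, using SQLite (the table and column names are illustrative, not from the article):

```python
import sqlite3

# In-memory database with two related tables: orders references customers.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        total REAL
    );
""")
conn.execute("INSERT INTO customers VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders VALUES (10, 1, 42.0)")

# The data 'tends to be combined when queried': one join answers the question.
row = conn.execute("""
    SELECT c.name, o.total
    FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchone()
print(row)  # ('Ada', 42.0)
```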

A common anti-pattern I see is to split the data like this:

[Diagram: the data split across two services, each with its own database, the relationship handled in the application layer]

Notice how the relationship between the tables has been pushed up from the database layer to the application layer.

This is often detrimental to reliability, performance, correctness, simplicity, flexibility and speed of development.

The Unreliable Network

Consider this pattern repeated further:

[Diagram: the pattern repeated across 6 servers, 5 databases, and 11 network requests]

Here we see 11 network requests, 5 databases, and 6 servers, compared to the two network requests, single database, and server of the original.

If we consider each request to have a 99% chance of success, then the original will have a 98% success rate (0.99² ≈ 0.98) and this new example will have a 90% success rate (0.99¹¹ ≈ 0.90). This gets worse every time the pattern is extended.
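The arithmetic is simple enough to check directly; the success rate of a chain of independent requests is the per-request probability raised to the length of the chain:

```python
def chain_success(p: float, n: int) -> float:
    """Probability that n independent requests, each with success
    probability p, all succeed."""
    return p ** n

print(round(chain_success(0.99, 2), 3))   # the original: 0.98
print(round(chain_success(0.99, 11), 3))  # the split design: 0.895
```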

See my article Microservices and Availability for a more detailed argument.

Loss of Functionality

This approach loses the functionality of the database, such as joins, filtering, ordering and aggregation. These must be re-implemented (often poorly) at the application layer.

For example, if two tables require a simple join, your application must fetch the rows from the first API, collect the relevant IDs, and request the matching records from the second API. To avoid an N+1 query, the second API must now support some form of ‘multi-fetch’.
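The re-implemented join looks something like the sketch below. The two fetch functions stand in for hypothetical HTTP clients for the two services; their names and payloads are illustrative:

```python
def fetch_orders():
    # Stand-in for a call to the first service's API.
    return [{"id": 10, "customer_id": 1}, {"id": 11, "customer_id": 2}]

def fetch_customers_bulk(ids):
    # Stand-in for the second service's 'multi-fetch' endpoint,
    # needed to avoid one request per ID (the N+1 problem).
    db = {1: {"id": 1, "name": "Ada"}, 2: {"id": 2, "name": "Grace"}}
    return [db[i] for i in ids]

# The 'join', performed by hand in the application layer.
orders = fetch_orders()
ids = {o["customer_id"] for o in orders}
customers = {c["id"]: c for c in fetch_customers_bulk(ids)}
joined = [{**o, "customer": customers[o["customer_id"]]} for o in orders]

print(joined[0]["customer"]["name"])  # Ada
```

A single SQL `JOIN` replaces all of this when the tables live in one database.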

It could alternatively be implemented by denormalising the data, but that comes with its own costs and complexities.

The Interface Explosion Problem

Changes to the structure of the data can result in multiple changes to dependent APIs.

[Diagram: a change to the data's structure rippling through multiple dependent APIs]

This can really slow down development and cause bugs!

Incorrectness

Splitting data into multiple databases loses ACID transactions.

Short of introducing distributed transactions, any consistency guarantee between the tables is lost: they can no longer be updated atomically.
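With a single database, related writes either all commit or all roll back; split across two services, a failure between the two calls leaves the data half-updated. A sketch of what the single-database case preserves, using SQLite (the schema is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
    INSERT INTO accounts VALUES (1, 100.0), (2, 0.0);
""")

# Both updates run inside one atomic transaction. If anything fails
# partway through, the whole transaction is rolled back.
try:
    with conn:  # commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
        raise RuntimeError("crash mid-transfer")
        conn.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 2")
except RuntimeError:
    pass

# The partial debit was rolled back; nothing is inconsistent.
print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
# [(100.0,), (0.0,)]
```

Replace the two `UPDATE`s with two HTTP calls to separate services and there is nothing to roll back: a crash after the first call leaves the money debited but never credited.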

See my article Consistency is Consistently Undervalued for more thoughts on this.

Performance Crash

The ‘API’ is often an HTTP server with a JSON interface. At every step through the API stack, the TCP, HTTP and JSON serialisation costs must be paid.

Aggregations, filtering and joins performed at the application layer can also result in over-fetching from the database.
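The over-fetching is easy to see with an aggregation. In the sketch below (schema illustrative), the application-layer version ships every row across the wire just to produce one number, while the pushed-down version ships back only the result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders(total) VALUES (?)",
                 [(t,) for t in (10.0, 20.0, 30.0)])

# Over-fetching: pull every row, then sum in the application layer.
rows = conn.execute("SELECT total FROM orders").fetchall()
app_sum = sum(t for (t,) in rows)

# Pushed down: the database returns a single value.
(db_sum,) = conn.execute("SELECT SUM(total) FROM orders").fetchone()

print(app_sum, db_sum)  # 60.0 60.0
```

Same answer, but the first version's transfer cost grows with the table; the second's does not.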

Why Do Developers Do This?

I think this is often an attempt to contain complexity by misapplying concepts from Object-Oriented Programming.

OOP teaches that data should be private and that there should be a public interface that operates on that data. Here, tables are seen as internal data and APIs are seen as a public interface to that data. Exposing internal detail is a sin!

Without going into a general critique of OOP, of which there are already plenty, the problem is that relational data naturally resists this kind of encapsulation. Tables are not objects!

Valid Use Cases

99% of the time, a prerequisite of having a valid use case for this is your company having the name ‘Google’, ‘Amazon’, or ‘Netflix’.

Performance is a valid but rare reason to do this. It is possible that one part of your data has a wildly different access pattern to the rest. In that case being able to independently scale or change your choice of database may be useful enough to overcome the resulting pain.

In my opinion, this is not a useful method for containing complexity. I have written Your Database as an API for some thoughts on reducing the complexity of large databases.

My advice is to keep your data together until something is about to break and there is nothing else you can do. I won’t say it’s never appropriate to do it, but splitting in this way has a high cost and should be a last resort.

