The High Cost of Splitting Related Data


Consider the following simple architecture:

[Diagram: a single application server making requests to one database containing two related tables]

The two tables in the database are related. I use ‘related’ loosely: there could be a foreign key from one table to another, maybe a shared identifier. To generalise, it is data that tends to be combined when queried.

A common anti-pattern I see is to split the data like this:

[Diagram: the two tables split into separate databases, each hidden behind its own API, with the relationship handled in the application layer]

Notice how the relationship between the tables has been pushed up from the database layer to the application layer.

This is often detrimental to reliability, performance, correctness, simplicity, flexibility and speed of development.

The Unreliable Network

Consider this pattern repeated further:

[Diagram: the split repeated across six servers and five databases, with eleven network requests between them]

Here we see 11 network requests, 5 databases, and 6 servers, compared to the two network requests, single database, and server of the original.

If we consider each request to have a 99% chance of success, then the original will have a 98% (0.99²) success rate and this new example will have a 90% success rate (0.99¹¹). This gets worse every time the pattern is extended.
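The compounding is easy to check; here is a minimal sketch of the arithmetic, using the 99% per-request success rate assumed above:

```python
# Minimal sketch: every request in the chain must succeed, so the
# per-request success rates multiply (assuming 99% per request, as above).
def chain_success_rate(per_request: float, num_requests: int) -> float:
    return per_request ** num_requests

print(f"original: {chain_success_rate(0.99, 2):.1%}")   # ~98.0%
print(f"split:    {chain_success_rate(0.99, 11):.1%}")  # ~89.5%
```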

See my article Microservices and Availability for a more detailed argument.

Loss of Functionality

This approach loses the functionality of the database, such as joins, filtering, ordering and aggregation. These must be re-implemented (often poorly) at the application layer.

For example, if two tables require a simple join, your application must fetch the results from the first table via its API, find the relevant IDs, and request them from the second API. If you want to avoid an N+1 query, the second API must now support some form of ‘multi-fetch’.
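To make that concrete, here is a minimal sketch of such a hand-rolled join, assuming two hypothetical HTTP/JSON services (the endpoint names and fields are illustrative, not from the original architecture):

```python
import requests  # assuming plain HTTP/JSON services

def orders_with_customers():
    # Fetch everything from the first API.
    orders = requests.get("https://orders-api.internal/orders").json()

    # Collect the foreign keys the database would have joined on.
    customer_ids = {o["customer_id"] for o in orders}

    # The second API needs a 'multi-fetch' endpoint, otherwise this
    # becomes one request per customer ID (the N+1 problem).
    customers = requests.get(
        "https://customers-api.internal/customers",
        params={"ids": ",".join(str(i) for i in customer_ids)},
    ).json()
    by_id = {c["id"]: c for c in customers}

    # The hand-rolled join the database would have done in one query.
    return [{**o, "customer": by_id.get(o["customer_id"])} for o in orders]
```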

It could alternatively be implemented by denormalising the data, but that comes with its own costs and complexities.

The Interface Explosion Problem

Changes to the structure of the data can result in multiple changes to dependent APIs.

[Diagram: a single change to the data's structure rippling through each dependent API]

This can really slow down development and cause bugs!
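As a small, hypothetical illustration: adding one column to the underlying table means the same field has to be mirrored in every layer that re-exposes it (the models below are invented for the example):

```python
from dataclasses import dataclass

# Hypothetical models: one new `email` column on the users table has to be
# repeated in every interface that mirrors the table's shape.

@dataclass
class UserRow:            # database layer
    id: int
    name: str
    email: str            # new column

@dataclass
class UserApiResponse:    # the users API must change to expose it
    id: int
    name: str
    email: str            # repeated

@dataclass
class UserDto:            # each consuming service repeats it again
    id: int
    name: str
    email: str            # repeated once more
```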

Incorrectness

Splitting data into multiple databases loses ACID transactions.

Short of introducing distributed transactions, any consistency between the tables has been lost and they cannot be updated atomically.
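A sketch of the failure mode, assuming hypothetical order and payment services; with a single database, both writes would sit inside one transaction:

```python
import requests  # hypothetical services, for illustration only

def place_order(order: dict, payment: dict) -> None:
    # With one database this would be a single transaction:
    #   BEGIN; INSERT order; INSERT payment; COMMIT;  -- all or nothing.
    # Split across two services there is no shared transaction.
    requests.post("https://orders-api.internal/orders", json=order).raise_for_status()

    # If this second call fails, the order above is already committed and
    # the system is left inconsistent unless compensation logic is written.
    requests.post("https://payments-api.internal/payments", json=payment).raise_for_status()
```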

See my article Consistency is Consistently Undervalued for more thoughts on this.

Performance Crash

The ‘API’ is often an HTTP server with a JSON interface. At every step through the API stack, the TCP, HTTP and JSON serialisation costs must be paid.

Aggregations, filtering and joins performed at the application layer can also result in over-fetching from the database.
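A small sketch of the over-fetching problem, assuming a hypothetical orders API that only exposes a plain list endpoint:

```python
import requests  # hypothetical API, for illustration only

def open_order_total() -> float:
    # The database could answer this with `WHERE status = 'open'` and SUM(),
    # touching only the matching rows. Through the API, every row crosses
    # the network and is filtered and aggregated in application code.
    orders = requests.get("https://orders-api.internal/orders").json()
    return sum(o["amount"] for o in orders if o["status"] == "open")
```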

Why Do Developers Do This?

I think this is often an attempt to contain complexity by misapplying concepts from Object-Oriented Programming.

OOP teaches that data should be private and that there should be a public interface that operates on that data. Here, tables are seen as internal data and APIs are seen as a public interface to that data. Exposing internal detail is a sin!

Without going into a general critique of OOP, of which there are already plenty, the problem is that relational data naturally resists that kind of encapsulation. Tables are not objects!

Valid Use Cases

99% of the time, a prerequisite of having a valid use case for this is your company having the name ‘Google’, ‘Amazon’, or ‘Netflix’.

Performance is a valid but rare reason to do this. It is possible that one part of your data has a wildly different access pattern to the rest. In that case being able to independently scale or change your choice of database may be useful enough to overcome the resulting pain.

In my opinion, this is not a useful method for containing complexity. I have written Your Database as an API for some thoughts on reducing the complexity of large databases.

My advice is to keep your data together until something is about to break and there is nothing else you can do. I won’t say it’s never appropriate to do it, but splitting in this way has a high cost and should be a last resort.

