Buzzwords in tech are a dime a dozen, and it can be difficult to know which ones actually hold water. Here's one that, unless you've been living under a rock, you're sure to have encountered: microservices. Let me start by saying this one definitely holds water, an Olympic-sized pool's worth.
In the days of old, web applications were built with what's called a "monolithic" structure: a system design pattern in which all the application's functions exist within a single, deployed instance. When developers scaled their service, they deployed additional instances of this monolith. Simple, but perhaps not an efficient use of resources. It won't be news to you that applications encompass a swath of functions, each of which serves a different amount of traffic. Scaling an entire application based on the needs of one subfunction is like adding the entire spice cabinet when all you really needed was a bit more salt. How can we separate these flavors?
Enter our buzzword, microservices: a system design pattern that decouples the tightly coupled functions of a monolithic application into an appropriate number of smaller sub-applications. You can get more salt without pepper. An e-commerce website is the classic example. To the user, the workflow is the same: log in, browse products, add them to the cart, and submit orders, all in one interface. Behind the scenes, however, each step in this workflow is its own microservice, all of which communicate with one another to present the user with the seamless interface you've worked so hard to supply.
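To make the decomposition concrete, here is a toy sketch of that e-commerce example. Each "service" is just a plain object standing in for an independently deployed HTTP service, and the direct function calls stand in for network requests; all the names (`authService`, `productService`, and so on) are illustrative, not from any real system.

```javascript
// Each object below represents what would be a separately deployed,
// separately scaled microservice in a real architecture.
const authService = {
  login: (user) => ({ userId: user, token: `token-${user}` }),
};

const productService = {
  browse: () => [{ sku: 'A1', price: 9.99 }, { sku: 'B2', price: 4.5 }],
};

const cartService = {
  carts: new Map(),
  add(userId, sku) {
    const items = this.carts.get(userId) || [];
    items.push(sku);
    this.carts.set(userId, items);
    return items;
  },
};

const orderService = {
  submit: (userId, items) => ({ userId, items, status: 'placed' }),
};

// The user-facing workflow stitches the services together, but each
// one could be updated or scaled out on its own.
const { userId } = authService.login('ada');
const [firstProduct] = productService.browse();
const cart = cartService.add(userId, firstProduct.sku);
const order = orderService.submit(userId, cart);
console.log(order.status); // "placed"
```

The point of the sketch is the boundary lines: if browsing traffic spikes, only `productService` needs more instances.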
So why deconstruct the wheel into multiple, smaller wheels? I’ve touched on one reason already: scalability. Naturally, more people will view our products than actually purchase our products. Now we can scale out our product browsing service without having to also scale our ordering service, the implications of which are massive. We’re making more efficient use of the computing resources at our disposal which ultimately means that company money is spent in a manner proportional to service demand. And scalability is just the tip of the iceberg.
A microservice architecture also facilitates a more streamlined Continuous Integration/Continuous Deployment (CI/CD) of updates and patches. If you're not familiar with CI/CD, consider a traffic circle: unlike an intersection with stop signs, which brings traffic to a halt every time, a traffic circle allows cars to be continuously integrated into the intersection. Likewise, instead of shutting an application service down to update, test, and redeploy it, CI/CD encompasses the software, strategies, and checks necessary to keep the traffic circle from coming to a halt. Teams that are building or updating a service won't disrupt one another, and a new service version can be deployed alongside the previous one.
What happens when a service's business logic demands a unique tech stack? A microservice architecture affords developers the opportunity to implement exactly the technologies that meet each service's needs.
Finally, assuming there’s a tool to see inside the microservice network (more on that later), discrete services allow developers to better isolate failures.
Sold yet? Not so fast: distributed services have some drawbacks as well, and these can all be neatly generalized into one word: complexity. One does not simply wake up, decouple their services, and bask in glory. Migrating from a monolith, or designing a microservice architecture from scratch, is a beast in its own right. What constitutes its own service? How will the API of each service be formulated? What communication protocol(s) will be used? How much time will it take? How will our CI/CD pipeline change?
And for those already working in a microservice environment, what challenges do they face? The main difficulty is the lack of windows to see inside the network and monitor communications. When a client initiates a request, the server that first handles it often needs to communicate with another service, and another, and another, and so on. We can think of this series of communications as inter-service conversations. When a developer wants to take a closer look at these exchanges, however, they are left in the dark. There is nothing built in to microservice networks that "associates" one HTTP request with another, nothing that says "request C was initiated by request B, which was initiated by request A." They all simply appear to be discrete requests.
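A tiny simulation makes the problem visible. Below, three hypothetical services (`A`, `B`, `C`) log each request they receive, with each downstream call standing in for an HTTP hop; notice that nothing in the resulting log ties a downstream entry back to the user action that caused it.

```javascript
// What a developer sees without context propagation: every service
// logs its own requests, but nothing links C's entry to B's to A's.
const logs = [];

function service(name, downstream) {
  return () => {
    logs.push(`${name}: received request`); // no shared identifier
    if (downstream) downstream();           // stands in for an HTTP call
  };
}

const serviceC = service('C');
const serviceB = service('B', serviceC);
const serviceA = service('A', serviceB);

serviceA(); // one user action...
serviceA(); // ...and another

console.log(logs.length); // 6 entries, with no way to group them by origin
```

Interleave those six entries across three real machines and several concurrent users, and reconstructing any one conversation by hand becomes hopeless.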
It was this "missing link," the absence of information connecting requests with their predecessors, that recently posed a significant challenge to our team's ability to identify the sources of stress in a microservices application. We knew which components were handling the heaviest loads, and we knew which requests had piled them there, but that information was of little use without a way of knowing which components were originating those requests in the first place.
The solution, it turned out, was implementing "context propagation": the exchange, or "propagation," of a unique identifier, or "context," from one HTTP request to another. In Node.js applications, this can be built on a native Node API called async hooks, which tracks the lifetime of asynchronous operations. By wrapping Node's HTTP module with async hooks, every incoming request can be inspected on arrival. If the request carries no "context," it's assumed to be new, and a context is applied. If the request initiates any subsequent requests, the context is propagated to each of them and, because the context is unique, the requests can correctly be identified as one associated conversation. Now, if these conversations are being logged to a database, the requests can be grouped by their context and analyzed however the developer sees fit.
To offer another analogy, context propagation offered a way to essentially give each new request a baton. When that request completed its leg from one microservice to another, it passed the baton, in the form of a correlating ID, to the next request to carry. So by the time the final request, the anchor leg of the relay, crossed the finish line, we knew it belonged to the same "team" as every request that had carried the same baton.
The insight this data offered into the behavior of our system was as helpful as we hoped it would be, making it easier for us to debug communications and, more importantly, isolate failures. At this point, it really feels like data we can’t live without.
And if it seems like information that could be of use to any devs out there, we'd highly recommend giving context propagation a try. You're welcome to play around with the npm package we published, for which we also built Chronos, an open-source visualization tool that helps you see not only communication data, but the health of your microservices as well ( https://github.com/oslabs-beta/Chronos ). And if you don't have a microservices architecture of your own to test it with, you can check out a basic dummy app on our GitHub repo.
Whatever you do, don't look at a microservice architecture as some impenetrable black box. Like anything else, it can be made accessible when you have the right tools.
So, go forth and scale, continuously integrate and deploy, choose your dream stack, and isolate those failures!
(co-authored w/ Benjamin Mizel https://www.linkedin.com/in/ben-mizel/ )