One of the strengths of Clojure is the ability to interoperate with and harness the power of the JVM. But this interoperability comes with its own baggage. For one, we can never really ignore the JVM — things like class loading, garbage collection, the bytecode interpreter and the JIT compiler are always there; for another, we simply have to respect the semantics of the JVM. In this post, we talk about one such semantic — the shutdown sequence of the JVM: how it translates to Clojure programs, and how best to productionise it while keeping business requirements unhampered during the shutdown process.
Introduction
At Helpshift, each service is deployed in horizontally scalable clusters. Each node in a cluster runs an uberjar as a Java process. Rolling deployments of new versions ensure that the service is always available. Multiple deployments are successfully executed every day.
Some services run hundreds of threads at a time in each node of the cluster. In order to ensure that the business goals of the service are not affected during deployment, we have to ensure that none of the active threads leave the system in an inconsistent state while restarting or scaling the service down.
What it means to be graceful during shutdown
The fundamental assumption underlying a deployment strategy is that the process of shutting down the service will be graceful: the service ensures that there is no non-determinism in the state of the system, even while it is shutting down.
Every service is written to serve a business goal, its raison d’être. Usually there are external stimuli on which the service acts. For example, for a Kafka consumer that sends out emails, it is the incoming messages (event triggers) that act as the stimulus. For periodic jobs, it is the setting-off of a timer that acts as a stimulus for the service. For HTTP servers, it is the incoming requests. Once a stimulus is triggered, the service has to perform a set of tasks to carry out the business goal — for the Kafka consumer sending emails, it could be: looking into the payload of the Kafka message, fetching additional data for composing the email (like attachments), contacting the mail server for the recipient, handling error codes with exponential backoff and sending out alerts if emails fail. Every task entailed in sending out an email is crucial and must not fail. The invariants expected of a service must hold even when the service is shutting down. We have to make sure that each task is allowed to complete, at least up to a reasonable “safe” point, before the JVM exits.
To get to the point of a graceful shutdown strategy, every service needs to know which part of each task needs to complete before the JVM can exit. A task in a service can spawn multiple sub tasks, and sub tasks can spawn other sub tasks. Each task and sub task needs to designate a part of the work as “critical”, marked in the diagram below in red.
A service that exhibits graceful shutdown should ensure that all critical portions of tasks and subtasks complete before the JVM is allowed to exit.
The JVM point of view of asynchrony in Clojure
What we called tasks in the previous section are carried out by threads inside the JVM. In Clojure, asynchronous tasks are usually spawned using future. Internally, futures are carried out by a thread pool managed by an ExecutorService. Clojure futures are spawned in an unbounded thread pool which shrinks when there are no remaining futures to run; this is the clojure.lang.Agent/soloExecutor thread pool. There is another thread pool, pooledExecutor, which is a fixed-size thread pool used for short tasks, mostly via agents in Clojure.
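Both pools are public static fields on clojure.lang.Agent, so they can be inspected directly at the REPL. A small sketch (the pool types and sizes noted in the comments reflect current Clojure versions and may vary):

;; the pool backing futures and send-off: a cached, unbounded thread pool
(class clojure.lang.Agent/soloExecutor)
;; => java.util.concurrent.ThreadPoolExecutor

;; the pool backing send: fixed at 2 + the number of available processors
(class clojure.lang.Agent/pooledExecutor)
;; => java.util.concurrent.ThreadPoolExecutor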
When a future is spawned (or a task is submitted to an agent via send-off), a Callable is submitted to the soloExecutor thread pool. In Clojure, since every function implements the Runnable and Callable interfaces, all we are doing here is submitting it to the queue of the ExecutorService (as we would do with Java Runnables).
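To see this interop concretely, here is a minimal sketch (the pool name demo-pool is only for illustration) of a plain Clojure function being submitted to an ExecutorService, just as a Java Callable would be:

(import '(java.util.concurrent Executors ExecutorService Callable))

;; a throwaway fixed-size pool, only for demonstration
(def ^ExecutorService demo-pool (Executors/newFixedThreadPool 2))

;; a Clojure fn already implements Callable, so it can be submitted as-is;
;; the type hint selects the Callable overload of .submit
(def result (.submit demo-pool ^Callable (fn [] (+ 1 2))))

(.get result)          ;; => 3
(.shutdown demo-pool)  ;; let the pool's threads die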
The thread factory used for launching futures does not launch the threads as daemon threads. This means that the JVM will not automatically shut down as long as there is at least one active thread in the soloExecutor (for futures) or the pooledExecutor (for agents). The standard way to shut down these two thread pools is to call shutdown-agents, which shuts down both pools: all currently executing tasks will finish, but no new tasks will be accepted. A step towards a graceful shutdown of Clojure services is to ensure that we call shutdown-agents during a shutdown (more on this later).
Let us try to peep into the JVM and see how these thread pools grow, shrink and die. In the following snippet, we create an empty leiningen project and start a trampoline headless REPL on it. Once it starts, we connect to the process from another leiningen process (using the network REPL) and launch a few futures. After a while, we call shutdown-agents.
# in one terminal window
lein new blank-app
cd blank-app
lein trampoline repl :headless :port 9922

# in another terminal window
lein repl :connect 9922
(dotimes [i 10] (future (Thread/sleep 100) ::done))
(dotimes [i 10] (future (Thread/sleep 10000) ::done))

# after a while...
(shutdown-agents)
While this snippet runs, we can inspect the threads in the JVM using a tool like VisualVM. The diagram below is for the entire lifecycle of the JVM. We can immediately notice that some threads live throughout the life of the JVM. Some of these threads are launched by the JVM itself, like Reference Handler and RMI Scheduler (0). Along with those, we can also see threads which facilitate the connection from one lein process to another via the network REPL. We also see clojure-agent-send-pool-* and clojure-agent-send-off-pool-* threads. The first set of threads comes from the pooledExecutor and the second from the soloExecutor.
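If VisualVM is not at hand, a rough version of the same picture is available from the REPL itself; a quick sketch:

;; list the names of all live threads, e.g. to watch
;; clojure-agent-send-off-pool-* threads appear and disappear
(->> (Thread/getAllStackTraces)
     keys
     (map #(.getName ^Thread %))
     sort)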
The phases marked as 1, 2 and 3 correspond to the three things we did in the snippet above:
- Phase 1 — connecting via nREPL: When we connected to a running lein headless REPL, a number of things started happening, as marked in phase 1 above. A few clojure-agent-send-off-pool-* threads and a few clojure-agent-send-pool-* threads got launched. Since the network REPL is also written in Clojure, these threads were launched to perform nREPL-specific tasks. (Note that both the soloExecutor and the pooledExecutor are static and hence get created when the Agent class is loaded.)
- Phase 2 — launching the first set of futures: As soon as we launched the futures, ten new threads were launched, since no available thread was present in the soloExecutor. After that, the second batch of futures simply reused the existing threads.
- Phase 3 — shutdown-agents: This happened when we called shutdown-agents. Note that all idle threads launched by either the soloExecutor or the pooledExecutor were killed by the ExecutorService framework. The only Clojure thread that was still alive was the one that was doing some work, clojure-agent-send-off-pool-2. A thread dump at this time would tell us exactly what this one thread was doing. (Note: nREPL session threads are not from the Clojure pool of threads and hence they remained.)
The JVM shutdown hook API
Now that we have seen the recommended way to shut down the Clojure runtime via shutdown-agents, let us look at the shutdown semantics of the JVM. The shutdown hook API was designed to handle shutdown of the JVM. Before discussing shutdown hooks, it is worth mentioning that the JVM will exit when any one of the following happens:
- The last non-daemon thread exits
- System/exit is called from within the JVM process
- The JVM is asked to shut down via an interrupt or signal from the operating system (this is how we restart a service in the deployment strategy discussed above), or via a system-wide event such as user logoff or system shutdown
- The JVM performs an emergency exit from within the process via Runtime/halt (possibly due to a catastrophic failure)
- The JVM is killed forcefully by the system (e.g. SIGKILL or hardware failure)
When the first, second or third case happens, the JVM starts the shutdown sequence, which means it will start running “hooks” that were registered during the non-shutdown phase of the JVM lifecycle. That is, before the JVM starts shutting down, the application can register hooks with the JVM, and at the time of shutdown these registered hooks will be invoked. When all shutdown hooks have finished, the JVM process will exit.
Adding a shutdown hook f can be done as shown in the snippet below, where f is a zero-arity function. Note that by creating a Thread object we are not necessarily launching an actual thread (that would only happen when the thread is started). As a side note, f has to be a zero-arity function because functions in Clojure implement the Runnable and Callable interfaces, and the .run and .call methods on the AFn class are bound to invoke the zero-arity version of the function.
(.addShutdownHook (Runtime/getRuntime)
(Thread. ^Runnable f))
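For instance, a hypothetical hook that flushes buffered output before exit (flush-logs! is an illustrative name, not part of the post's code) could be registered like this:

(defn flush-logs! []
  (println "flushing buffered output before exit...")
  (flush))

(.addShutdownHook (Runtime/getRuntime)
                  (Thread. ^Runnable flush-logs!))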
Registered shutdown hooks follow these rules:
- Since shutdown hooks are just functions (as far as Clojure is concerned), exceptions in shutdown hooks are handled like exceptions in any other part of the code (defaulting to the standard error output stream).
- Shutdown hooks cannot be registered or deregistered once the shutdown sequence has started.
- Once the shutdown sequence is initiated, there is no guarantee on the order in which registered shutdown hooks will be executed. They may even run concurrently.
- The JVM will wait until all shutdown hooks have completed and exit afterwards without regard to any other factors like non-terminated threads.
- Daemon threads continue to run even while the shutdown sequence is ongoing (the same is true for non-daemon threads which are still running, if the shutdown sequence was initiated by System/exit or an external signal).
- Shutdown hooks can be bypassed from within the JVM by calling Runtime/halt, forcibly killing the process during anomalous situations or deadlocked shutdown hooks (a short sketch follows this list).
- Once the shutdown sequence is started, it cannot be stopped except by calling Runtime/halt.
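The difference between a normal exit and Runtime/halt can be seen in a small, deliberately destructive sketch; evaluate these forms only in a throwaway REPL:

(comment
  ;; runs all registered shutdown hooks and then exits
  (System/exit 0)

  ;; skips every registered shutdown hook and kills the process immediately
  (.halt (Runtime/getRuntime) 1))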
Note: Shutdown hooks may not get executed when running a normal leiningen process. To run shutdown hooks, we have to use trampoline or run the jar as a Java process: https://github.com/technomancy/leiningen/issues/1854
It is advised to call shutdown-agents in a shutdown hook so as to properly shut down the agent thread pools, but this does not guarantee that currently executing futures will be allowed to finish (since the JVM will exit the moment the shutdown hooks have finished, irrespective of anything else in the system). For example:
(.addShutdownHook (Runtime/getRuntime) (Thread. ^Runnable shutdown-agents))
The effect of abrupt shutdown on services
When the shutdown sequence (the running of all shutdown hooks) is finished, the JVM exits, without any regard to pending tasks or currently running non-shutdown threads.
If shutdown-agents was called, then we might see a RejectedExecutionException if some code path launches futures. This can happen, for example, in a Kafka consumer which, even after stopping, still has some messages buffered in memory and needs to spawn a future to process them. If shutdown-agents was not called during shutdown, then the tasks will simply be stopped abruptly, with no way to find out why some tasks were left in an inconsistent state; this is why it is advised to call shutdown-agents during the shutdown of the JVM, to gain control over the shutdown process. An abrupt shutdown can have a severe effect on the service’s business goal. For us at Helpshift, this could mean a number of things depending on which service was abruptly stopped, ranging from chatbots not running as they were intended to, to time-based cron tasks not getting executed. When these anomalies are visible to the end user, they can be deal-breaking. Hence, having a graceful restart strategy in the design of services is as important as ensuring the service is well-designed and bug-free.
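The RejectedExecutionException scenario is easy to reproduce at the REPL; a short sketch:

(comment
  (shutdown-agents)
  ;; the soloExecutor no longer accepts tasks, so this throws
  ;; java.util.concurrent.RejectedExecutionException
  (future (println "this task is rejected")))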
Prioritisation of threads in a system
In a production web application, there can be multiple components with threads of their own. For example, with a servlet container like Jetty, there are threads that serve incoming requests; then there are Clojure futures running async tasks; and there are other Java threads which may be launched by any part of the system, ranging from JDBC connection pooling to auto-committing offsets to a Kafka topic. Because so many things are running at the same time, when the time comes to shut down a service, care needs to be taken so as to not abruptly stop any component.
As we discussed before, there can be multiple portions of a task that are critical and must be allowed to complete even if shutdown was initiated. We solve this by ensuring that critical tasks and sub tasks are handled by a dedicated pool of threads, for which the system shutdown will halt. Tasks which are being executed by this priority pool of threads must always be allowed to finish, even when shutdown is in progress. This gives us a way to categorise threads in decreasing order of importance to the business context:
- p0 threads: These are the threads that respond to external stimuli — like responding to an incoming HTTP request or an event notification. These are the most important threads in our system as they directly serve the business goal, the raison d’être of the service. These threads must always be allowed to complete their current task before shutting the system down.
- p1 threads: This is the pool of threads to perform critical tasks asynchronously (or synchronously depending on the caller) — this is the second most important set of threads for the application. All critical tasks that are not directly performed by the p0 threads are to be performed by the p1 threads. The system must halt shutdown for these threads to finish their work as well.
- Clojure future/core.async/agent threads: These are threads launched by the Clojure runtime or by libraries, as we saw in the example above. They are used in a more generic context and are not strictly performing business-facing tasks like the previous two sets of threads. Since Clojure threads (like those launched via future) can be spawned by any library used by the service, introducing the p1 threads ensures that nothing outside of the service’s own threads will ever be considered shutdown-blocking.
- Other JVM threads: There are other threads running inside the JVM which are further removed from the application but are instrumental in getting background work done, for example JDBC connection pool threads or threads sending metrics about the service to a monitoring system.
By differentiating threads from the perspective of importance to the business context, we can be sure that as long as there is some pending task in one of the priority pools (p0 or p1), we cannot shut the service down. If we proceed with shutdown in the order of importance of the thread pools listed above, critical tasks will not be abruptly aborted: after we stop the p0 thread pool and wait for all of its current tasks to complete, we proceed to closing the p1 thread pool and wait for all of its current tasks to complete, and then move on to the rest of the system. This process of shutting the system down is a trickle-down approach, where the act of shutting down spreads through the system from the critical, business-facing components to the non-critical ones, waiting where required.
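A minimal sketch of what such a priority pool could look like, assuming the Claypoole library; the names p1-pool and in-future are illustrative, not the exact code from the original post:

(require '[com.climate.claypoole :as cp])
(import '(java.util.concurrent Executors RejectedExecutionException))

;; an unbounded, cached pool mirroring clojure.lang.Agent/soloExecutor,
;; reserved for business-critical (p1) work
(defonce p1-pool (Executors/newCachedThreadPool))

(defmacro in-future
  "Run body asynchronously on the critical p1 pool. If the pool has already
  been shut down, fall back to running the body on the calling thread
  (similar in spirit to CallerRunsPolicy)."
  [& body]
  `(try
     (cp/future p1-pool ~@body)
     (catch RejectedExecutionException _#
       (let [result# (do ~@body)]
         (doto (promise) (deliver result#))))))

;; non-critical work keeps using the regular Clojure future
(future (println "non-critical work"))
;; critical work goes through the priority pool
(in-future (println "critical work that must complete before the JVM exits"))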
In the above snippet, we are using the Claypoole library with a thread pool identical to the Clojure soloExecutor, but these threads will only serve tasks which are critical to the business requirement of the service. The service will continue to use the normal future primitive to launch non-critical threads and will use in-future to launch critical threads. If the service guarantees this, then we are on our way to ensuring that the shutdown sequence lets the priority pools finish all their tasks. A minor detail to note here is that we are guarding against the scenario where the p1 pool was shut down but there was still something remaining to be done; we handle this by running the task in the calling thread, like the CallerRunsPolicy (rejection handlers don’t play well with Claypoole futures).
The ordered shutdown framework
Having defined a priority scheme for thread pools, we now need to handle the problem of shutting down threads (and other components) in a controlled and ordered fashion. The overarching philosophy is to insulate the service’s shutdown sequence from the JVM shutdown sequence by registering one shutdown hook with the JVM, which is a Clojure function that respects a predetermined order for shutting down components and thread pools. Order is important here because some threads depend on others to hand off work, and only when all dependencies of a thread pool are closed can we safely close a given pool.
;; here the key determines the order of shutdown:
(add-shutdown-hook :clojure.core/shutdown-agents shutdown-agents)
(add-shutdown-hook :api.server/stop stop-api-server)
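add-shutdown-hook here is our own helper rather than anything built into Clojure; a minimal sketch of how such an ordered registry could be built (all names are illustrative):

(defonce ^:private shutdown-hooks (atom (sorted-map)))

(defn add-shutdown-hook
  "Register a zero-arity function under a key; hooks run in the sorted
  order of their keys once the JVM shutdown sequence begins."
  [k f]
  (swap! shutdown-hooks assoc k f))

(defn- run-shutdown-hooks! []
  (doseq [[k f] @shutdown-hooks]
    (try
      (f)
      (catch Throwable t
        (.println System/err (str "shutdown hook " k " failed: " t))))))

;; a single JVM-level hook delegates to the ordered registry
(defonce ^:private jvm-shutdown-hook
  (doto (Runtime/getRuntime)
    (.addShutdownHook (Thread. ^Runnable run-shutdown-hooks!))))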
Along with the ordered shutdown sequence, we also have to make sure that the JVM shutdown is halted while there are still pending tasks on our priority thread pools. For this, a future-wait-shutdown-hook key is registered with a function that waits for the critical pools (p0 and p1) to finish, halting the shutdown sequence by blocking until then. It should be noted that the ordering of shutdown hooks could be implemented differently if we use libraries like component, but we would still need to ensure that the shutdown sequence waits for the critical pool of threads to shrink. The main purpose of differentiating thread pools based on their importance to the business context is to facilitate a shutdown mechanism that does not hamper the business goals of the service. The ordering of shutdown hooks is merely a means to that end.
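A sketch of what that waiting function could look like, assuming the hypothetical p1-pool from the earlier sketch and an arbitrary timeout:

(import '(java.util.concurrent ExecutorService TimeUnit))

(defn future-wait-shutdown-hook
  "Stop accepting new critical tasks and block the shutdown sequence until
  every task already submitted to the priority pool has finished (or the
  timeout expires)."
  []
  (.shutdown ^ExecutorService p1-pool)
  (.awaitTermination ^ExecutorService p1-pool 60 TimeUnit/SECONDS))

(add-shutdown-hook :future-wait-shutdown-hook future-wait-shutdown-hook)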
Conclusion
The solution we proposed here is just one way to ensure that Clojure services don’t abruptly exit without completing critical tasks. There are other solutions which can harness the concurrent nature of shutdown hooks and speed up the shutdown process, but that would incur the overhead of synchronisation across the shutdown hooks. Alternatively, there is a different school of thought in which we do not spawn any critical asynchronous tasks at all: anything that needs to be performed asynchronously is pushed to separate worker processes via a distributed task queue, something like Celery, thereby nullifying the requirement to wait for any thread other than the p0 threads at the time of shutdown.
In our journey to stabilise our Clojure service deployments, we have come to the realisation that even though Clojure provides very good semantics for concurrency, we still need to understand the semantics that the JVM imposes on every program that runs on it. In this post, we looked at the shutdown semantics of the JVM and our preferred way to shutdown Clojure services. Some of the lessons we learnt along the way are:
- Differentiating between critical and non-critical tasks is important to ensure graceful shutdown of services by waiting only for the critical tasks to complete.
- Clojure futures provide an abstraction over the ExecutorService thread pools in the java.util.concurrent package; it is worthwhile to understand the semantics of ExecutorService.
- In an auto-scaled, auto-deployed, multi-cluster system of Clojure services, it is very important to think of shutting down the service as part of the design of the service itself.