For HA and easier management of our containers we can create a swarm cluster.
It is composed of a couple of nodes with the Docker engine installed.
How do we set up a swarm and use it?
Example configuration and prerequisites
In my lab environment I used three nodes:
Role in Swarm | Server | IP |
---|---|---|
Manager | docker-host1.lukas.int | 10.10.10.20 |
Worker1 | docker-host2.lukas.int | 10.10.10.21 |
Worker2 | docker-host3.lukas.int | 10.10.10.22 |
Each node has the Docker engine installed using the standard procedure, the same as for a single-node Docker host.
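That procedure is not repeated here, but as a rough sketch - assuming CentOS/RHEL-style hosts, which matches the `systemctl` usage later in this post - it boils down to:

```
# add the official Docker CE repository and install the engine (illustrative)
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker
```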
Ports that need to be open on each node:
Protocol | Port | Purpose |
---|---|---|
TCP | 2377 | cluster management |
TCP and UDP | 7946 | node communication |
UDP | 4789 | overlay network |
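Assuming the hosts use firewalld (an assumption about this lab - adjust to your own firewall), opening these ports could look like:

```
sudo firewall-cmd --permanent --add-port=2377/tcp
sudo firewall-cmd --permanent --add-port=7946/tcp
sudo firewall-cmd --permanent --add-port=7946/udp
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --reload
```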
Create swarm cluster
Log in to the manager node.
The most important parameters for `docker swarm init`:

- `--autolock` - `true` or `false` - enables autolock for the manager node (more on this later in this post)
- `--advertise-addr` - address advertised to other cluster members for the API and overlay networks
- `--listen-addr` - address used for cluster management traffic
- `--availability` - can be `active` (new tasks allowed), `pause` (new tasks not allowed, but existing ones keep running) or `drain` (new tasks not allowed; existing tasks are shut down and rescheduled on other nodes) - set `drain` on a manager node to keep containers off it
- `--default-addr-pool` - by default `10.0.0.0/8` - lets us set a different address pool for overlay networks created in the cluster
- `--default-addr-pool-mask-len` - subnet mask length for the subnets carved out of `--default-addr-pool` (it determines how many subnets fit in the pool)

An example combining several of these flags is sketched after the init output below.
```
[lukas@docker-host1 ~]$ docker swarm init --advertise-addr 10.10.10.20 --listen-addr 10.10.10.20
Swarm initialized: current node (x7mdjbmfz3ttkxbvelriix4yz) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0zwqchv43d2alvl99fgw2mh6cnv7xc8vakspd90yjh4i7eiwwx-cn32plktlwjf8q25azk37ij5v 10.10.10.20:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
In the output we get the command for adding a worker node to the cluster.
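For reference, a sketch that combines several of the other `docker swarm init` flags listed above (the address pool and availability values are illustrative, not what was used in this lab):

```
docker swarm init \
  --advertise-addr 10.10.10.20 \
  --listen-addr 10.10.10.20 \
  --autolock=true \
  --availability drain \
  --default-addr-pool 10.100.0.0/16 \
  --default-addr-pool-mask-len 24
```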
Get tokens for adding nodes to cluster
If we need the tokens later and didn’t write them down, we can get them from any manager node.
Manager token
```
[lukas@docker-host1 ~]$ docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0zwqchv43d2alvl99fgw2mh6cnv7xc8vakspd90yjh4i7eiwwx-956h96o5ivrekolr42k71bgqc 10.10.10.20:2377
```
Worker token
```
[lukas@docker-host1 ~]$ docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0zwqchv43d2alvl99fgw2mh6cnv7xc8vakspd90yjh4i7eiwwx-cn32plktlwjf8q25azk37ij5v 10.10.10.20:2377
```
Important!
Tokens should be well secured - they give access to the cluster.
If a token leaks, we should rotate it with the following command (the worker and manager tokens can be rotated independently):
```
[lukas@docker-host1 ~]$ docker swarm join-token --rotate worker
Successfully rotated worker join token.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0zwqchv43d2alvl99fgw2mh6cnv7xc8vakspd90yjh4i7eiwwx-c7r78dy7kwho1zq3uq7w26xhl 10.10.10.20:2377
```
Add workers
I will add my `docker-host2` and `docker-host3` as workers.
```
[lukas@docker-host2 ~]$ docker swarm join --token SWMTKN-1-0zwqchv43d2alvl99fgw2mh6cnv7xc8vakspd90yjh4i7eiwwx-cn32plktlwjf8q25azk37ij5v 10.10.10.20:2377
This node joined a swarm as a worker.
```

```
[lukas@docker-host3 ~]$ docker swarm join --token SWMTKN-1-0zwqchv43d2alvl99fgw2mh6cnv7xc8vakspd90yjh4i7eiwwx-cn32plktlwjf8q25azk37ij5v 10.10.10.20:2377
This node joined a swarm as a worker.
```
List nodes
We can see that the manager node is marked as `Leader`.
Listing is available only from a manager node.
```
[lukas@docker-host1 ~]$ docker node ls
ID                            HOSTNAME                 STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
x7mdjbmfz3ttkxbvelriix4yz *   docker-host1.lukas.int   Ready    Active         Leader           19.03.8
va779vfr6v115025a15yvuntv     docker-host2.lukas.int   Ready    Active                          19.03.8
k61h5c7sqzjepifp6lzjxpr95     docker-host3.lukas.int   Ready    Active                          19.03.8
```
What does each type of node do?
Manager
- sends tasks to worker nodes (or manager nodes too, if we allow containers there)
- sends heartbeats to nodes in the cluster to stay up to date with the cluster state
- serves the API for management
It is strongly recommended to have at least three manager nodes in a production cluster for HA. Such a configuration survives the loss of one manager node: managers need a quorum of floor(n/2)+1, so three managers tolerate one failure and five tolerate two.
Even then, only one of the managers is the leader at a time; the rest replicate data from it and wait to take over if it breaks down.
Worker
This type of node only hosts scheduled containers.
It can be promoted to a manager with the `docker node promote` command.
Inspect node details
```
[lukas@docker-host1 ~]$ docker node inspect docker-host2.lukas.int --pretty
ID:                     va779vfr6v115025a15yvuntv
Hostname:               docker-host2.lukas.int
Joined at:              2020-04-24 13:44:41.431444118 +0000 utc
Status:
 State:                 Ready
 Availability:          Active
 Address:               10.10.10.21
Platform:
 Operating System:      linux
 Architecture:          x86_64
Resources:
 CPUs:                  2
 Memory:                1.786GiB
Plugins:
 Log:                   awslogs, fluentd, gcplogs, gelf, journald, json-file, local, logentries, splunk, syslog
 Network:               bridge, host, ipvlan, macvlan, null, overlay
 Volume:                local
Engine Version:         19.03.8
TLS Info:
 TrustRoot:
-----BEGIN CERTIFICATE-----
<cert here>
-----END CERTIFICATE-----

 Issuer Subject:        MBMxETAPBgNVBAMTCHN3YXJtLWNh
 Issuer Public Key:     <public key here>
```
Without the `--pretty` flag we get raw JSON describing the node. We can extract specific information from it with the `--format` flag.
Check node role
```
[lukas@docker-host1 ~]$ docker inspect docker-host1.lukas.int --format "{{.Spec.Role}}"
manager
```
Check node status
```
[lukas@docker-host1 ~]$ docker inspect docker-host1.lukas.int --format "{{.Status.State}}"
ready
```
Changing node parameters
```
[lukas@docker-host1 ~]$ docker node update --availability drain docker-host3.lukas.int
docker-host3.lukas.int
[lukas@docker-host1 ~]$ docker node ls
ID                            HOSTNAME                 STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
x7mdjbmfz3ttkxbvelriix4yz *   docker-host1.lukas.int   Ready    Active         Leader           19.03.8
va779vfr6v115025a15yvuntv     docker-host2.lukas.int   Ready    Active                          19.03.8
k61h5c7sqzjepifp6lzjxpr95     docker-host3.lukas.int   Ready    Drain                           19.03.8
```
Promoting/demoting node
Promote
```
[lukas@docker-host1 ~]$ docker node promote docker-host3.lukas.int
Node docker-host3.lukas.int promoted to a manager in the swarm.
[lukas@docker-host1 ~]$ docker node ls
ID                            HOSTNAME                 STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
x7mdjbmfz3ttkxbvelriix4yz *   docker-host1.lukas.int   Ready    Active         Leader           19.03.8
va779vfr6v115025a15yvuntv     docker-host2.lukas.int   Ready    Active                          19.03.8
nqe3eyk55lvmwsmqy5lx8v8vx     docker-host3.lukas.int   Ready    Active         Reachable        19.03.8
```
Demote
```
[lukas@docker-host1 ~]$ docker node demote docker-host3.lukas.int
Manager docker-host3.lukas.int demoted in the swarm.
```
Leave cluster
From the leaving node:
```
[lukas@docker-host2 ~]$ docker swarm leave
Node left the swarm.
```
It is possible to use the `--force` flag if you want to evict a manager node.
From a manager - the node being removed should be shut down first:
```
[lukas@docker-host1 ~]$ docker node rm docker-host2.lukas.int
docker-host2.lukas.int
```
Locking swarm cluster
All swarm managers hold a copy of the TLS encryption keys.
If we want to protect these keys, we can enable the `autolock` feature.
When it is enabled, after a node restart its configuration data needs to be decrypted with a special key before any service can start on it.
Enable autolock
You can enable `autolock` at cluster initialization with `docker swarm init` or later with `docker swarm update`.
```
[lukas@docker-host1 ~]$ docker swarm update --autolock=true
Swarm updated.
To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

    SWMKEY-1-fFGBM6X97rFMI/2NHn3meJGH+j5lhLT1VnzgeQcZBAU

Please remember to store this key in a password manager, since without it you
will not be able to restart the manager.
```
Testing autolock
I will promote `docker-host2.lukas.int` to a manager.
```
[lukas@docker-host1 ~]$ docker node promote docker-host2.lukas.int
Node docker-host2.lukas.int promoted to a manager in the swarm.
[lukas@docker-host1 ~]$ docker node ls
ID                            HOSTNAME                 STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
x7mdjbmfz3ttkxbvelriix4yz *   docker-host1.lukas.int   Ready    Active         Leader           19.03.8
va779vfr6v115025a15yvuntv     docker-host2.lukas.int   Ready    Active         Reachable        19.03.8
nqe3eyk55lvmwsmqy5lx8v8vx     docker-host3.lukas.int   Ready    Active                          19.03.8
```
Restart `docker-host2.lukas.int` - restarting only the Docker daemon is enough.
```
[root@docker-host2 ~]# systemctl restart docker
```
Try any docker command on `docker-host2.lukas.int`.
```
[lukas@docker-host2 ~]$ docker node ls
Error response from daemon: Swarm is encrypted and needs to be unlocked before it can be used. Please use "docker swarm unlock" to unlock it.
```
Unlock it with the key generated earlier:
```
[lukas@docker-host2 ~]$ docker swarm unlock
Please enter unlock key:
```
Check that the node started correctly:
```
[lukas@docker-host2 ~]$ docker node ls
ID                            HOSTNAME                 STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
x7mdjbmfz3ttkxbvelriix4yz     docker-host1.lukas.int   Ready    Active         Leader           19.03.8
va779vfr6v115025a15yvuntv *   docker-host2.lukas.int   Ready    Active         Reachable        19.03.8
nqe3eyk55lvmwsmqy5lx8v8vx     docker-host3.lukas.int   Ready    Active                          19.03.8
```
Check unlock key
```
[lukas@docker-host1 ~]$ docker swarm unlock-key
To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

    SWMKEY-1-fFGBM6X97rFMI/2NHn3meJGH+j5lhLT1VnzgeQcZBAU

Please remember to store this key in a password manager, since without it you
will not be able to restart the manager.
```
Rotate key
```
[lukas@docker-host1 ~]$ docker swarm unlock-key --rotate
Successfully rotated manager unlock key.

To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

    SWMKEY-1-RsrHxoajv/57yi9Eng7SDcDS7TGxcWI0MqKrpMTlpWs

Please remember to store this key in a password manager, since without it you
will not be able to restart the manager.
```
Swarm Services
All operations on swarm services are done from a manager node.
Create service
If you plan to create a service with an image from a registry that requires authentication, you should log in first:
```
[lukas@docker-host1 ~]$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: <username>
Password: <password>
WARNING! Your password will be stored unencrypted in /home/lukas/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
```
Then you can create the service with the optional `--with-registry-auth` flag - the docker client will securely copy your authentication information to all nodes that will be pulling the image for your service.
Important!
It is strongly advised not to use image names without a tag or with the `latest` tag.
When Swarm creates a service it resolves the image name and tag into a digest hash; from that moment the service configuration holds only the hash pointing to the image that was tagged `latest` at the moment the `docker service create` command was issued.
Using the `latest` tag in a service description can lead to mistakes, because the `latest` tag always moves to the newest software version.
Always use explicit image names like `ubuntu:19.04`.
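To see which digest a running service was actually pinned to, the service spec can be inspected - a sketch, with the digest replaced by a placeholder:

```
[lukas@docker-host1 ~]$ docker service inspect --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}' web_server
httpd:2.4@sha256:<digest here>
```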
We will create a docker service named `web_server`; two containers will be deployed across our docker swarm cluster, and each of them will expose its port 80 on port 80 of the docker host machine (it is important to check that this port is available on every docker host in the swarm cluster).
The containers will be hosting Apache version 2.4.
```
[lukas@docker-host1 ~]$ docker service create --with-registry-auth --name web_server --replicas=2 --publish 80:80 httpd:2.4
h6ttr4yuoukc8zhwo4a5oin33
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
```
As with `docker run`, we can add parameters that change container start behaviour or override some image settings, for example:
`--dns --entrypoint --env --workdir --user`
The full list is available with the `docker service create -h` command.
Templating service
When creating a service it is possible to set some container parameters based on service metadata.
Parameters that we can use with templates:
--hostname --mount --env
Possible templates:
.Service.ID .Service.Name .Service.Labels .Node.ID .Node.Hostname .Task.Name .Task.Slot
We want to set the hostname in the containers so that it corresponds to the node ID and service name:
```
[lukas@docker-host1 ~]$ docker service create --with-registry-auth --name web_server --replicas=6 --publish target=80,published=80 --hostname="{{.Node.ID}}-{{.Service.Name}}" httpd:2.4
x0zrp0zkls6v3wfue39638lev
overall progress: 6 out of 6 tasks
1/6: running   [==================================================>]
2/6: running   [==================================================>]
3/6: running   [==================================================>]
4/6: running   [==================================================>]
5/6: running   [==================================================>]
6/6: running   [==================================================>]
verify: Service converged
```
Check the hostname of a random container:
```
[lukas@docker-host2 ~]$ docker exec -it f3cd22cef724 bash
root@va779vfr6v115025a15yvuntv-web_server:/usr/local/apache2# hostname
va779vfr6v115025a15yvuntv-web_server
```
List services
```
[lukas@docker-host1 ~]$ docker service ls
ID             NAME         MODE         REPLICAS   IMAGE       PORTS
h6ttr4yuoukc   web_server   replicated   2/2        httpd:2.4   *:80->80/tcp
```
List service details
```
[lukas@docker-host1 ~]$ docker service ps h6ttr4yuoukc
ID             NAME           IMAGE       NODE                     DESIRED STATE   CURRENT STATE                ERROR   PORTS
zdv7uqomaz29   web_server.1   httpd:2.4   docker-host3.lukas.int   Running         Running about a minute ago
fe36zt5e5zsz   web_server.2   httpd:2.4   docker-host2.lukas.int   Running         Running 59 seconds ago
```
Check service response - ingress mode
We can easily test Apache with `curl` and the GET method.
```
[lukas@docker-host1 ~]$ curl -X GET 127.0.0.1:80
<html><body><h1>It works!</h1></body></html>

[lukas@docker-host2 ~]$ curl -X GET 127.0.0.1:80
<html><body><h1>It works!</h1></body></html>

[lukas@docker-host3 ~]$ curl -X GET 127.0.0.1:80
<html><body><h1>It works!</h1></body></html>
```
As we can see, even though the containers run only on the `docker-host2` and `docker-host3` machines, the service they serve is available on every node in the cluster on port 80. This default behaviour is called `ingress` mode - all calls to port 80 on any node in the swarm cluster are automatically redirected to the nodes with containers serving the service.
Set port publishing in host mode
If you want to publish the ports of a service only on the nodes where its containers actually reside, you should deploy it with the `mode=host` parameter in the `--publish` flag.
```
[lukas@docker-host1 ~]$ docker service create --with-registry-auth --name web_server --replicas=2 --publish mode=host,target=80,published=80 httpd:2.4
s7qjpd70u19ttk49jbgu647p5
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
[lukas@docker-host1 ~]$ docker service ps s7qjpd70u19t
ID             NAME           IMAGE       NODE                     DESIRED STATE   CURRENT STATE            ERROR   PORTS
tic5zzp4s0ry   web_server.1   httpd:2.4   docker-host2.lukas.int   Running         Running 33 seconds ago           *:80->80/tcp
bjz1zlnpxh1v   web_server.2   httpd:2.4   docker-host3.lukas.int   Running         Running 33 seconds ago           *:80->80/tcp
```
Let’s check where we can connect to our service.
```
[lukas@docker-host1 ~]$ curl -X GET 127.0.0.1:80
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused

[lukas@docker-host2 ~]$ curl -X GET 127.0.0.1:80
<html><body><h1>It works!</h1></body></html>

[lukas@docker-host3 ~]$ curl -X GET 127.0.0.1:80
<html><body><h1>It works!</h1></body></html>
```
As we can see, the ports are available only on the nodes with containers.
Mount volumes
As with a standalone container, in swarm we can attach volumes with the `--mount` flag.
We mount the `site_content` volume at the `/var/html/www` location in every container.
This volume was not created earlier, so docker will create it by itself.
```
[lukas@docker-host1 ~]$ docker service create --with-registry-auth --name web_server --replicas=6 --publish target=80,published=80 --mount source=site_content,target=/var/html/www httpd:2.4
fim71hxxnzic4pn9trwh77sjt
overall progress: 6 out of 6 tasks
1/6: running   [==================================================>]
2/6: running   [==================================================>]
3/6: running   [==================================================>]
4/6: running   [==================================================>]
5/6: running   [==================================================>]
6/6: running   [==================================================>]
verify: Service converged
```
I created six replicas to show an interesting thing - we have a three-node cluster, so it is highly probable that some node will run more than one container.
How many volumes will be created?
```
[lukas@docker-host2 ~]$ docker container ls
CONTAINER ID   IMAGE       COMMAND              CREATED          STATUS          PORTS    NAMES
b5cb6f86ba06   httpd:2.4   "httpd-foreground"   16 seconds ago   Up 11 seconds   80/tcp   web_server.3.6b9hgq4cjz90aaufplpjt61er
262af192ced6   httpd:2.4   "httpd-foreground"   16 seconds ago   Up 11 seconds   80/tcp   web_server.6.mmkia5wsc7trnaqv9ozil3h0b
[lukas@docker-host2 ~]$ docker volume ls
DRIVER    VOLUME NAME
local     site_content
```
Exactly one! A service assumes that all its containers do the same thing, so if there is more than one container on a node, those containers will share a common volume.
It is important to keep this behaviour in mind.
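Another caveat worth noting (standard behaviour of the `local` volume driver, not specific to this lab): the volume is created independently on every node that runs a task, so its content is not shared between nodes. Inspecting it on each host shows a node-local mountpoint:

```
[lukas@docker-host2 ~]$ docker volume inspect site_content --format '{{.Mountpoint}}'
/var/lib/docker/volumes/site_content/_data
```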
Connect services to networks
Create an overlay network if you don’t have one:
```
[lukas@docker-host1 ~]$ docker network create --driver overlay management_network
brgzvb0gxkb6y8q9mgb4rurwu
[lukas@docker-host1 ~]$ docker network ls
NETWORK ID     NAME                 DRIVER    SCOPE
[...]
tverx8ooqine   ingress              overlay   swarm
brgzvb0gxkb6   management_network   overlay   swarm
```
Create the service with the `--network` flag:
```
[lukas@docker-host1 ~]$ docker service create --with-registry-auth --name web_server --replicas=2 --publish target=80,published=80 --network management_network httpd:2.4
tbojbrsx89nukjh58vvknru1t
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
```
From a node hosting one of the containers, check that it is connected to the additional network:
```
[lukas@docker-host2 ~]$ docker inspect b8c2037f5235
[
    {
        "Id": "b8c2037f5235515c464e2af9444ab252efd080e1fa23fc0595f22c4a201ef760",
        "Created": "2020-04-27T14:29:57.718762759Z",
        "Path": "httpd-foreground",
[...]
        "Networks": {
[...]
            "management_network": {
                "IPAMConfig": {
                    "IPv4Address": "10.0.1.3"
                },
                "Links": null,
                "Aliases": [
                    "b8c2037f5235"
                ],
                "NetworkID": "brgzvb0gxkb6y8q9mgb4rurwu",
                "EndpointID": "3b94e38428228bb764954b5619d928ad64a26a37cc06010741fcd7ddff9c7e61",
                "Gateway": "",
                "IPAddress": "10.0.1.3",
                "IPPrefixLen": 24,
                "IPv6Gateway": "",
                "GlobalIPv6Address": "",
                "GlobalIPv6PrefixLen": 0,
                "MacAddress": "02:42:0a:00:01:03",
                "DriverOpts": null
```
Global service
Until now we were creating services in replicated mode, where we set an exact number of replicas.
If we want to have a service container on every available node of our cluster, we should run it with `--mode=global`.
```
[lukas@docker-host1 ~]$ docker service create --with-registry-auth --name web_server --mode=global --publish target=80,published=80 httpd:2.4
wh27xrc33f515f8ccjkfqmhpw
overall progress: 3 out of 3 tasks
va779vfr6v11: running   [==================================================>]
nqe3eyk55lvm: running   [==================================================>]
x7mdjbmfz3tt: running   [==================================================>]
verify: Service converged
[lukas@docker-host1 ~]$ docker service ps wh27xrc33f515f8ccjkfqmhpw
ID             NAME                                   IMAGE       NODE                     DESIRED STATE   CURRENT STATE            ERROR   PORTS
wop37gk76oy1   web_server.x7mdjbmfz3ttkxbvelriix4yz   httpd:2.4   docker-host1.lukas.int   Running         Running 21 seconds ago
dfgkhs7js37r   web_server.va779vfr6v115025a15yvuntv   httpd:2.4   docker-host2.lukas.int   Running         Running 23 seconds ago
wgsf8plhlghg   web_server.nqe3eyk55lvmwsmqy5lx8v8vx   httpd:2.4   docker-host3.lukas.int   Running         Running 23 seconds ago
```
Test global service behaviour
Let’s check what will happen when we remove node from cluster.
For simplicity I will set the node into the `drain` mode described earlier.
```
[lukas@docker-host1 ~]$ docker node update --availability drain docker-host3.lukas.int
docker-host3.lukas.int
```
Check the service status - the container on the drained node is in shutdown state:
```
[lukas@docker-host1 ~]$ docker service ps wh27xrc33f515f8ccjkfqmhpw
ID             NAME                                   IMAGE       NODE                     DESIRED STATE   CURRENT STATE             ERROR   PORTS
wop37gk76oy1   web_server.x7mdjbmfz3ttkxbvelriix4yz   httpd:2.4   docker-host1.lukas.int   Running         Running 3 minutes ago
dfgkhs7js37r   web_server.va779vfr6v115025a15yvuntv   httpd:2.4   docker-host2.lukas.int   Running         Running 3 minutes ago
wgsf8plhlghg   web_server.nqe3eyk55lvmwsmqy5lx8v8vx   httpd:2.4   docker-host3.lukas.int   Shutdown        Shutdown 1 second ago
```
Now we can make our node available again:
```
[lukas@docker-host1 ~]$ docker node update --availability active docker-host3.lukas.int
docker-host3.lukas.int
```
Check the service - as soon as the node becomes available again, the global service starts a new container on it:
```
[lukas@docker-host1 ~]$ docker service ps wh27xrc33f515f8ccjkfqmhpw
ID             NAME                                   IMAGE       NODE                     DESIRED STATE   CURRENT STATE                     ERROR   PORTS
k7l2nnjstjkn   web_server.nqe3eyk55lvmwsmqy5lx8v8vx   httpd:2.4   docker-host3.lukas.int   Running         Running less than a second ago
wop37gk76oy1   web_server.x7mdjbmfz3ttkxbvelriix4yz   httpd:2.4   docker-host1.lukas.int   Running         Running 3 minutes ago
dfgkhs7js37r   web_server.va779vfr6v115025a15yvuntv   httpd:2.4   docker-host2.lukas.int   Running         Running 3 minutes ago
wgsf8plhlghg   web_server.nqe3eyk55lvmwsmqy5lx8v8vx   httpd:2.4   docker-host3.lukas.int   Shutdown        Shutdown 16 seconds ago
```
Set specified nodes for containers
If we have several nodes in our cluster but want to run a service only on specific ones, we can use node labels - placement constraints.
Set the label `workload_type=web` on a node:
```
[lukas@docker-host1 ~]$ docker node update --label-add workload_type=web docker-host3.lukas.int
```
Check the labels on a node:
```
[lukas@docker-host1 ~]$ docker node inspect --format '{{ .Spec.Labels }}' docker-host1.lukas.int
map[workload_type:web]
```
For labeling, we can use any key and value that we want.
There is no dictionary of available keys.
Run the service with the flag `--constraint node.labels.workload_type==web`:
```
[lukas@docker-host1 ~]$ docker service create --with-registry-auth --name web_server --replicas=2 --publish target=80,published=80 --constraint node.labels.workload_type==web httpd:2.4
idj6xxlcndqztnaxz2fvn7n5k
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
```
We asked for a replicated service with two replicas. Only `docker-host3` has the label on it, so both replicas will be started on that node.
```
[lukas@docker-host1 ~]$ docker service ps idj6xxlcndqztnaxz2fvn7n5k
ID             NAME           IMAGE       NODE                     DESIRED STATE   CURRENT STATE            ERROR   PORTS
2wrrvkgtbjks   web_server.1   httpd:2.4   docker-host3.lukas.int   Running         Running 13 seconds ago
7z81ske7htq0   web_server.2   httpd:2.4   docker-host3.lukas.int   Running         Running 13 seconds ago
```
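Two related sketches (not from the original lab): constraints can also reference built-in node attributes such as `node.role`, `node.hostname` or `node.id`, and a custom label can be removed again with `--label-rm`:

```
# keep replicas off manager nodes using a built-in attribute (no label needed)
docker service create --name web_server --replicas=2 --constraint node.role==worker httpd:2.4

# remove the custom label added earlier
docker node update --label-rm workload_type docker-host3.lukas.int
```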
Scale service
Update number of replicas:
```
[lukas@docker-host1 ~]$ docker service scale web_server=2
web_server scaled to 2
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
```
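The same effect can be achieved with `docker service update`; for example, scaling out to four replicas (an illustrative number, not from this lab) would be:

```
docker service update --replicas 4 web_server
```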
Update service
Set update preferences
If you edit a service with the `docker service update` command, docker applies the change by restarting containers.
You can configure what this restart procedure looks like with the following `docker service create` or `docker service update` flags:
- `--update-delay` - time gap between restarting consecutive batches of containers
- `--update-parallelism` - number of containers in a batch - by default: 1
- `--update-failure-action` - what to do when a container fails during the update process - by default: `pause` - it can also be set to `continue`
- `--update-max-failure-ratio` - value from 0 to 1 - how large a fraction of containers may fail during the update process - 0.1 means 10%
Setting update preferences on a service does not restart any containers by itself.
```
[lukas@docker-host1 ~]$ docker service update --update-delay 5s --update-parallelism 2 web_server
web_server
overall progress: 6 out of 6 tasks
1/6: running   [==================================================>]
2/6: running   [==================================================>]
3/6: running   [==================================================>]
4/6: running   [==================================================>]
5/6: running   [==================================================>]
6/6: running   [==================================================>]
verify: Service converged
```
You can check the current preferences with the `docker service inspect` command:
```
[lukas@docker-host1 ~]$ docker service inspect web_server
[...]
            "UpdateConfig": {
                "Parallelism": 2,
                "Delay": 5000000000,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
[...]
```
Update examples
Add port publishing
```
[lukas@docker-host1 ~]$ docker service update --publish-add 80 web_server
web_server
overall progress: 2 out of 6 tasks
1/6: running   [==================================================>]
2/6: running   [==================================================>]
3/6: ready     [======================================>            ]
4/6: ready     [======================================>            ]
5/6:
6/6:
```
Remove port publishing
```
[lukas@docker-host1 ~]$ docker service update --publish-rm 80 web_server
web_server
overall progress: 0 out of 6 tasks
1/6: ready     [======================================>            ]
2/6:
3/6: ready     [======================================>            ]
4/6:
5/6:
6/6:
```
Add network
```
[lukas@docker-host1 ~]$ docker service update --network-add management_network web_server
web_server
overall progress: 2 out of 6 tasks
1/6: running   [==================================================>]
2/6: running   [==================================================>]
3/6: preparing [=================================>                 ]
4/6: ready     [======================================>            ]
5/6:
6/6:
```
Remove network
```
[lukas@docker-host1 ~]$ docker service update --network-rm management_network web_server
web_server
overall progress: 0 out of 6 tasks
1/6: ready     [======================================>            ]
2/6:
3/6:
4/6: ready     [======================================>            ]
5/6:
6/6:
```
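Another common update, not shown in this lab, is changing the image itself - for example moving the service to a newer httpd tag (the tag below is illustrative):

```
docker service update --image httpd:2.4.46 web_server
```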
Rollback service
To roll back the last change we can use `docker service update --rollback`.
Rollback example
```
[lukas@docker-host1 ~]$ docker service update --rollback web_server
web_server
rollback: manually requested rollback
overall progress: rolling back update: 0 out of 6 tasks
1/6:
2/6:
3/6:
4/6: starting  [=====>                                             ]
5/6:
6/6:
```
Check logs for service
```
[lukas@docker-host1 ~]$ docker service logs web_server
web_server.1.ogwy5n7dgw6l@docker-host2.lukas.int | AH00557: httpd: apr_sockaddr_info_get() failed for 2c910786bf88
web_server.1.ogwy5n7dgw6l@docker-host2.lukas.int | AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
web_server.1.ogwy5n7dgw6l@docker-host2.lukas.int | AH00557: httpd: apr_sockaddr_info_get() failed for 2c910786bf88
web_server.1.ogwy5n7dgw6l@docker-host2.lukas.int | AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
web_server.1.ogwy5n7dgw6l@docker-host2.lukas.int | [Tue Apr 28 11:53:36.436335 2020] [mpm_event:notice] [pid 1:tid 139739008033920] AH00489: Apache/2.4.43 (Unix) configured -- resuming normal operations
web_server.1.ogwy5n7dgw6l@docker-host2.lukas.int | [Tue Apr 28 11:53:36.436683 2020] [core:notice] [pid 1:tid 139739008033920] AH00094: Command line: 'httpd -D FOREGROUND'
web_server.2.kz0f5nire5az@docker-host3.lukas.int | AH00557: httpd: apr_sockaddr_info_get() failed for e39d1978f0fc
web_server.2.kz0f5nire5az@docker-host3.lukas.int | AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
web_server.2.kz0f5nire5az@docker-host3.lukas.int | AH00557: httpd: apr_sockaddr_info_get() failed for e39d1978f0fc
web_server.2.kz0f5nire5az@docker-host3.lukas.int | AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
web_server.2.kz0f5nire5az@docker-host3.lukas.int | [Tue Apr 28 11:53:36.571678 2020] [mpm_event:notice] [pid 1:tid 140383257580672] AH00489: Apache/2.4.43 (Unix) configured -- resuming normal operations
web_server.2.kz0f5nire5az@docker-host3.lukas.int | [Tue Apr 28 11:53:36.572014 2020] [core:notice] [pid 1:tid 140383257580672] AH00094: Command line: 'httpd -D FOREGROUND'
```
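To follow the logs live or limit how much history is printed, the standard flags can be used, for example:

```
docker service logs --follow --tail 20 --timestamps web_server
```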
Remove service
```
[lukas@docker-host1 ~]$ docker service rm web_server
web_server
```