docker CE on Linux by Example (Part 5): Service Orchestration


Overview

GitHub project: https://github.com/superwujc

Please respect the original work. Reposting is welcome with attribution: https://my.oschina.net/superwjc/blog/3056296

Previous articles in this series:

docker CE on Linux by Example (Part 1): Installation and Basic Operation

docker CE on Linux by Example (Part 2): Data Storage and Persistence

docker CE on Linux by Example (Part 3): Image and Container Management

docker CE on Linux by Example (Part 4): Swarm Cluster Configuration

Deploying a distributed application means handling every logical tier and the relationships between them: front-end proxy, web application, message queue, cache, database, and so on. Containerized deployment introduces the concept of service orchestration for this purpose, which centrally controls container life cycles and runtime parameters, including but not limited to:

  • Container deployment
  • Resource control
  • Load balancing
  • Health checks
  • Application configuration
  • Scaling
  • Relocation

docker CE offers two approaches, compose and stack, which orchestrate services through container runtime parameters defined in a configuration file written in YAML or JSON. This article uses a front-end proxy (nginx) plus web application (tomcat) setup to illustrate both approaches.

Example

Environment

  • Two hosts: docker_host_0 (192.168.9.168/24) and docker_host_1 (192.168.9.169/24), with identical system and software environments: fresh minimal installs, a single physical NIC, CentOS Linux release 7.6.1810 (Core), kernel 3.10.0-957.12.2.el7.x86_64, SELinux and the firewall disabled.
  • docker installed with defaults, version 18.09.6, no additional configuration.
  • The base image is the latest official CentOS 7 image.
  • The tomcat and JDK environments, as well as the nginx configuration files and logs, are mounted into the containers as directories.
  • The source archives jdk-8u212-linux-x64.tar.gz and apache-tomcat-8.5.40.tar.gz sit in /opt/ on the hosts.
  • nginx is Tengine, compiled and installed from source.

The compose approach

  1. Install docker-compose

    All docker-compose releases are listed at https://github.com/docker/compose/releases/; this article uses 1.24.0.

    Download the binary:

    [root@docker_host_0 ~]# ip addr show eth0 | sed -n '/inet /p' | awk '{print $2}'
    192.168.9.168/24
    [root@docker_host_0 ~]#
    [root@docker_host_0 ~]# uname -r
    3.10.0-957.12.2.el7.x86_64
    [root@docker_host_0 ~]#
    [root@docker_host_0 ~]# docker -v
    Docker version 18.09.6, build 481bc77156
    [root@docker_host_0 ~]#
    [root@docker_host_0 ~]# curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   617    0   617    0     0    540      0 --:--:--  0:00:01 --:--:--   541
    100 15.4M  100 15.4M    0     0   261k      0  0:01:00  0:01:00 --:--:--  836k
    [root@docker_host_0 ~]#
    [root@docker_host_0 ~]# ll /usr/local/bin/docker-compose
    -rw-r--r-- 1 root root 16154160 May 30 23:23 /usr/local/bin/docker-compose
    [root@docker_host_0 ~]#
    [root@docker_host_0 ~]# chmod u+x /usr/local/bin/docker-compose
    [root@docker_host_0 ~]#
    [root@docker_host_0 ~]# which docker-compose
    /usr/local/bin/docker-compose
    [root@docker_host_0 ~]#
    [root@docker_host_0 ~]# docker-compose version
    docker-compose version 1.24.0, build 0aa59064
    docker-py version: 3.7.2
    CPython version: 3.6.8
    OpenSSL version: OpenSSL 1.1.0j  20 Nov 2018
    [root@docker_host_0 ~]#

    Install the bash completion script:

    [root@docker_host_0 ~]# curl -L https://raw.githubusercontent.com/docker/compose/1.24.0/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 13258  100 13258    0     0  14985      0 --:--:-- --:--:-- --:--:-- 14980
    [root@docker_host_0 ~]#
    [root@docker_host_0 ~]# source /etc/bash_completion.d/docker-compose
    [root@docker_host_0 ~]#
    [root@docker_host_0 ~]# docker-compose
    build    create   exec     kill     port     push     run      stop     up
    bundle   down     help     logs     ps       restart  scale    top      version
    config   events   images   pause    pull     rm       start    unpause
    [root@docker_host_0 ~]#
  2. Deploy the services

    Create the source paths for the directory mounts:

    In this example, tomcat and the JDK are placed under /opt/apps/app_0/source and /opt/jdks respectively.

    The pattern attribute in server.xml sets the default access log format. It is changed to %A:%{local}p %a:%{remote}p, i.e. local IP:port followed by remote IP:port, so that the origin of each request can be told apart (the resulting Valve entry is sketched after the commands below).

    [root@docker_host_0 ~]# cd /opt/
    [root@docker_host_0 opt]#
    [root@docker_host_0 opt]# ls
    apache-tomcat-8.5.40.tar.gz  containerd  jdk-8u212-linux-x64.tar.gz
    [root@docker_host_0 opt]#
    [root@docker_host_0 opt]# mkdir -p /opt/{apps/app_0/source,jdks}
    [root@docker_host_0 opt]#
    [root@docker_host_0 opt]# tar axf apache-tomcat-8.5.40.tar.gz --strip-components=1 -C apps/app_0/source/
    [root@docker_host_0 opt]#
    [root@docker_host_0 opt]# sed -i 's/pattern="%h %l %u %t/pattern="%A:%{local}p %a:%{remote}p %t/' apps/app_0/source/conf/server.xml
    [root@docker_host_0 opt]#
    [root@docker_host_0 opt]# tar axf jdk-8u212-linux-x64.tar.gz -C jdks/
    [root@docker_host_0 opt]#
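    For reference, after the sed substitution the access log Valve in server.xml should look roughly like the following (a sketch based on the stock Tomcat 8.5 server.xml; only the pattern attribute is changed):

    <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
           prefix="localhost_access_log" suffix=".txt"
           pattern="%A:%{local}p %a:%{remote}p %t &quot;%r&quot; %s %b" />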

    Edit the Dockerfile:

    [root@docker_host_0 opt]# vi dockerfile-for-nginx
    FROM centos:latest
    ARG tmp_dir='/tmp'
    ARG repo_key='http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7'
    ARG repo_src='http://mirrors.163.com/.help/CentOS7-Base-163.repo'
    ARG repo_dst='/etc/yum.repos.d/CentOS-Base.repo'
    ARG tengine_ver='2.3.0'
    ARG tengine_src="http://tengine.taobao.org/download/tengine-${tengine_ver}.tar.gz"
    ARG tengine_dst="tengine-${tengine_ver}.tar.gz"
    ARG tengine_cfg_opts='--prefix=/usr/local/nginx \
                  --with-http_gzip_static_module \
                  --with-http_stub_status_module \
                  --with-http_ssl_module \
                  --with-http_slice_module \
                  --with-pcre'
    ARG depend_rpms='gcc make openssl-devel pcre-devel'
    RUN cd ${tmp_dir} \
        && cp -a ${repo_dst} ${repo_dst}.ori \
        && curl -L ${repo_src} -o ${repo_dst} \
        && curl -L ${tengine_src} -o ${tengine_dst} \
        && rpm --import ${repo_key} \
        && yum -y update --downloadonly --downloaddir=. \
        && yum -y install ${depend_rpms} --downloadonly --downloaddir=. \
        && yum -y install ./*.rpm \
        && useradd www -s /sbin/nologin \
        && tar axf ${tengine_dst} \
        && cd tengine-${tengine_ver} \
        && ./configure ${tengine_cfg_opts} \
        && make \
        && make install \
        && cd \
        && yum -y remove gcc make cpp \
        && yum clean all \
        && rm -rf ${tmp_dir}/*
    EXPOSE 80/tcp 443/tcp
    ENV PATH ${PATH}:/usr/local/nginx/sbin
    CMD nginx -g "daemon off;"

    Edit the orchestration configuration file:

    YAML configuration files in docker conventionally carry the yml or yaml suffix (a convention, not a requirement).

    This example defines two services, named webapp and proxy:

    The webapp service runs the centos:latest image (image), mounts volumes/directories (volumes), and specifies environment variables (environment), the working directory (working_dir), the command run inside the container (command), and a restart policy (restart) of on-failure.

    The proxy service runs the tengine_nginx:2.3.0 image, depends on the containers of the webapp service being started (depends_on), and publishes the container's port 80 as port 80 on the host (ports).

    Network parameters can be set through the top-level networks key in the configuration file; if omitted, defaults apply. Containers attached to the same network can reach each other on all ports, so in this example tomcat's default port 8080 is reachable by nginx and publishing it externally is optional (expose/ports).
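    If the network did need customizing, an optional top-level networks block could be added; a minimal sketch (the driver and subnet values are illustrative only and are not part of the file used below):

    networks:
      default:
        driver: bridge
        ipam:
          config:
            - subnet: 172.28.0.0/24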

    [root@docker_host_0 opt]# vi tomcat-with-nginx-compose.yml
    version: '3.7'
    
    services:
      webapp:
        image: centos:latest
        volumes:
          - /opt/jdks/jdk1.8.0_212:/opt/jdks/jdk1.8.0_212:ro
          - /opt/apps/app_0/source:/opt/apps/app_0
        environment:
          JAVA_HOME: /opt/jdks/jdk1.8.0_212
        working_dir: /opt/apps/app_0
        command: bin/catalina.sh run
        restart: on-failure
    
      proxy:
        build:
          context: .
          dockerfile: dockerfile-for-nginx
        depends_on:
          - webapp
        image: tengine_nginx:2.3.0
        volumes:
            - /opt/apps/app_0/nginx/conf:/usr/local/nginx/conf:ro
            - /opt/apps/app_0/nginx/logs:/usr/local/nginx/logs
        restart: on-failure
        ports:
          - '80:80/tcp'

    Validate the orchestration configuration file:

    docker-compose config checks the syntax and directives of the configuration file and prints its full resolved contents; with -q it only performs the check without printing.

    docker-compose looks for docker-compose.yml or docker-compose.yaml by default; -f points it at a custom file.

    [root@docker_host_0 opt]# docker-compose -f tomcat-with-nginx-compose.yml config -q
    [root@docker_host_0 opt]#

    Build the image:

    docker-compose build builds images according to the parameters defined under services.<service>.build in the configuration file; services without a build key are skipped.

    [root@docker_host_0 opt]# docker-compose -f tomcat-with-nginx-compose.yml build
    ...
    [root@docker_host_0 opt]# 
    [root@docker_host_0 opt]# docker image ls
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    tengine_nginx       2.3.0               9404e1b71b70        32 seconds ago      340MB
    centos              latest              9f38484d220f        2 months ago        202MB
    [root@docker_host_0 opt]#
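    The build step performed by compose is roughly equivalent to running docker build directly with the same context and Dockerfile (shown for reference only, not part of the run above):

    docker build -f dockerfile-for-nginx -t tengine_nginx:2.3.0 .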

    Run nginx in non-daemon mode (nginx -g "daemon off;") for 60 seconds and copy out the files nginx needs:

    [root@docker_host_0 opt]# docker run -dit --rm --name t_nginx tengine_nginx:2.3.0 bash -c 'timeout 60 nginx -g "daemon off;"'
    3cc8de88de3fe295657fde08552165e69514c368689e2078ec89771e23cb16e8
    [root@docker_host_0 opt]# 
    [root@docker_host_0 opt]# docker container ls -a
    CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES
    3cc8de88de3f        tengine_nginx:2.3.0   "bash -c 'timeout 60…"   7 seconds ago       Up 6 seconds        80/tcp, 443/tcp     t_nginx
    [root@docker_host_0 opt]# 
    [root@docker_host_0 opt]# docker exec -it t_nginx ls -l /usr/local/nginx
    total 0
    drwx------ 2 nobody root   6 May 30 23:39 client_body_temp
    drwxr-xr-x 2 root   root 333 May 30 23:37 conf
    drwx------ 2 nobody root   6 May 30 23:39 fastcgi_temp
    drwxr-xr-x 2 root   root  40 May 30 23:37 html
    drwxr-xr-x 1 root   root  58 May 30 23:39 logs
    drwx------ 2 nobody root   6 May 30 23:39 proxy_temp
    drwxr-xr-x 2 root   root  19 May 30 23:37 sbin
    drwx------ 2 nobody root   6 May 30 23:39 scgi_temp
    drwx------ 2 nobody root   6 May 30 23:39 uwsgi_temp
    [root@docker_host_0 opt]# 
    [root@docker_host_0 opt]# docker cp t_nginx:/usr/local/nginx/ /opt/apps/app_0/
    [root@docker_host_0 opt]# 
    [root@docker_host_0 opt]# ll /opt/apps/app_0/nginx/
    total 0
    drwx------ 2 root root   6 May 30 23:39 client_body_temp
    drwxr-xr-x 2 root root 333 May 30 23:37 conf
    drwx------ 2 root root   6 May 30 23:39 fastcgi_temp
    drwxr-xr-x 2 root root  40 May 30 23:37 html
    drwxr-xr-x 2 root root  58 May 30 23:39 logs
    drwx------ 2 root root   6 May 30 23:39 proxy_temp
    drwxr-xr-x 2 root root  19 May 30 23:37 sbin
    drwx------ 2 root root   6 May 30 23:39 scgi_temp
    drwx------ 2 root root   6 May 30 23:39 uwsgi_temp
    [root@docker_host_0 opt]#

    Edit the nginx configuration file:

    docker implements service discovery internally and provides name resolution for containers attached to the same network. In this example, once the webapp service is up, its name can be resolved by nginx in the proxy service.

    user www www;
    worker_processes auto;
    pid logs/nginx.pid;
    error_log logs/error.log warn;
    worker_rlimit_nofile 51200;
    
    events {
    	use epoll;
    	worker_connections 4096;
    }
    
    http {
    	include mime.types;
    	default_type application/octet-stream;
    	server_names_hash_bucket_size 128;
    	client_header_buffer_size 16k;
    	large_client_header_buffers 4 32k;
    	client_max_body_size 8m;
    	access_log off;
    	sendfile on;
    	tcp_nopush on;
    	tcp_nodelay on;
    	keepalive_timeout 30;
    	proxy_cache_methods POST GET HEAD;
    	open_file_cache max=655350 inactive=20s;
    	open_file_cache_valid 30s;
    	open_file_cache_min_uses 2;
    
    	gzip on;
    	gzip_min_length 1k;
    	gzip_buffers 8 8k;
    	gzip_http_version 1.0;
    	gzip_comp_level 4;
    	gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php;
    	gzip_vary on;
    	server_tokens off;
    
    	log_format main	'$remote_addr\t$upstream_addr\t[$time_local]\t$request\t'
    					'$status\t$body_bytes_sent\t$http_user_agent\t$http_referer\t'
                        '$http_x_forwarded_for\t$request_time\t$upstream_response_time\t$remote_user\t'
    					'$request_body';
    
    	map $http_upgrade $connection_upgrade {
    		default upgrade;
    		'' close;
    	}
    
    	upstream tomcat-app-0 {
    		server webapp:8080;
    	}
    
    	server {
    		listen 80;
    		server_name 127.0.0.1;
    		charset utf-8;
    		client_max_body_size 75M;
    
    		location / {
    			proxy_pass http://tomcat-app-0;
    		}
    
    		access_log logs/webapp-access.log main;
    	}
    }

    Test the nginx configuration file:

    [root@docker_host_0 opt]# docker run -it --rm --mount type=bind,src=/opt/apps/app_0/nginx/conf,dst=/usr/local/nginx/conf,ro --add-host webapp:127.0.0.1 tengine_nginx:2.3.0 bash -c 'nginx -t'
    nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
    nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
    [root@docker_host_0 opt]#

    Start the services:

    docker-compose up starts the services:

    By default every service defined in the configuration file is started; specific service names may be given to start only those services.

    If an image named in the configuration file does not exist, it is built first by default.

    -d/--detach runs the containers in the background, equivalent to the -d/--detach option of docker run.

    --scale sets the number of containers for a service, in the form service=count.

    [root@docker_host_0 opt]# docker-compose -f tomcat-with-nginx-compose.yml up -d --scale webapp=3
    Creating network "opt_default" with the default driver
    Creating opt_webapp_1 ... done
    Creating opt_webapp_2 ... done
    Creating opt_webapp_3 ... done
    Creating opt_proxy_1  ... done
    [root@docker_host_0 opt]#
    [root@docker_host_0 opt]# docker container ls -a
    CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                         NAMES
    6b55fe98a99c        tengine_nginx:2.3.0   "/bin/sh -c 'nginx -…"   10 seconds ago      Up 9 seconds        0.0.0.0:80->80/tcp, 443/tcp   opt_proxy_1
    0617d640c60a        centos:latest         "bin/catalina.sh run"    11 seconds ago      Up 9 seconds                                      opt_webapp_2
    c85f2de181cd        centos:latest         "bin/catalina.sh run"    11 seconds ago      Up 10 seconds                                     opt_webapp_3
    2517e03f11c9        centos:latest         "bin/catalina.sh run"    11 seconds ago      Up 10 seconds                                     opt_webapp_1
    [root@docker_host_0 opt]#

    docker-compose creates a bridge network by default:

    [root@docker_host_0 opt]# docker network ls
    NETWORK ID          NAME                DRIVER              SCOPE
    cb90714e47b3        bridge              bridge              local
    a019d8b63640        host                host                local
    bb7095896ade        none                null                local
    80ce8533b964        opt_default         bridge              local
    [root@docker_host_0 opt]#
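    To confirm the name resolution mentioned earlier, the webapp service name can be looked up from inside the proxy container; a quick check (getent ships with the CentOS base image, and the lookup should return the IPs of the webapp containers):

    docker-compose -f tomcat-with-nginx-compose.yml exec proxy getent hosts webapp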

    Check the application processes inside the containers:

    docker-compose top accepts service names to show the processes of particular services only.

    [root@docker_host_0 opt]# docker-compose -f tomcat-with-nginx-compose.yml top
    opt_proxy_1
    UID     PID    PPID    C   STIME   TTY     TIME                        CMD
    ----------------------------------------------------------------------------------------------
    root   13674   13657   0   00:28   ?     00:00:00   nginx: master process nginx -g daemon off;
    1000   13738   13674   0   00:28   ?     00:00:00   nginx: worker process
    1000   13739   13674   0   00:28   ?     00:00:00   nginx: worker process
    
    opt_webapp_1
    UID     PID    PPID    C   STIME   TTY     TIME                          CMD
    -------------------------------------------------------------------------------------------------
    root   13367   13342   1   00:28   ?     00:00:02   /opt/jdks/jdk1.8.0_212/bin/java -Djava.util.l
                                                        ogging.config.file=/opt/apps/app_0/conf/loggi
                                                        ng.properties -Djava.util.logging.manager=org
                                                        .apache.juli.ClassLoaderLogManager
                                                        -Djdk.tls.ephemeralDHKeySize=2048 -Djava.prot
                                                        ocol.handler.pkgs=org.apache.catalina.webreso
                                                        urces -Dorg.apache.catalina.security.Security
                                                        Listener.UMASK=0027 -Dignore.endorsed.dirs=
                                                        -classpath /opt/apps/app_0/bin/bootstrap.jar:
                                                        /opt/apps/app_0/bin/tomcat-juli.jar
                                                        -Dcatalina.base=/opt/apps/app_0
                                                        -Dcatalina.home=/opt/apps/app_0
                                                        -Djava.io.tmpdir=/opt/apps/app_0/temp
                                                        org.apache.catalina.startup.Bootstrap start
    
    opt_webapp_2
    UID     PID    PPID    C   STIME   TTY     TIME                          CMD
    -------------------------------------------------------------------------------------------------
    root   13436   13388   1   00:28   ?     00:00:02   /opt/jdks/jdk1.8.0_212/bin/java -Djava.util.l
                                                        ogging.config.file=/opt/apps/app_0/conf/loggi
                                                        ng.properties -Djava.util.logging.manager=org
                                                        .apache.juli.ClassLoaderLogManager
                                                        -Djdk.tls.ephemeralDHKeySize=2048 -Djava.prot
                                                        ocol.handler.pkgs=org.apache.catalina.webreso
                                                        urces -Dorg.apache.catalina.security.Security
                                                        Listener.UMASK=0027 -Dignore.endorsed.dirs=
                                                        -classpath /opt/apps/app_0/bin/bootstrap.jar:
                                                        /opt/apps/app_0/bin/tomcat-juli.jar
                                                        -Dcatalina.base=/opt/apps/app_0
                                                        -Dcatalina.home=/opt/apps/app_0
                                                        -Djava.io.tmpdir=/opt/apps/app_0/temp
                                                        org.apache.catalina.startup.Bootstrap start
    
    opt_webapp_3
    UID     PID    PPID    C   STIME   TTY     TIME                          CMD
    -------------------------------------------------------------------------------------------------
    root   13425   13397   1   00:28   ?     00:00:02   /opt/jdks/jdk1.8.0_212/bin/java -Djava.util.l
                                                        ogging.config.file=/opt/apps/app_0/conf/loggi
                                                        ng.properties -Djava.util.logging.manager=org
                                                        .apache.juli.ClassLoaderLogManager
                                                        -Djdk.tls.ephemeralDHKeySize=2048 -Djava.prot
                                                        ocol.handler.pkgs=org.apache.catalina.webreso
                                                        urces -Dorg.apache.catalina.security.Security
                                                        Listener.UMASK=0027 -Dignore.endorsed.dirs=
                                                        -classpath /opt/apps/app_0/bin/bootstrap.jar:
                                                        /opt/apps/app_0/bin/tomcat-juli.jar
                                                        -Dcatalina.base=/opt/apps/app_0
                                                        -Dcatalina.home=/opt/apps/app_0
                                                        -Djava.io.tmpdir=/opt/apps/app_0/temp
                                                        org.apache.catalina.startup.Bootstrap start
    [root@docker_host_0 opt]#

    Access the web service; requests are distributed across all containers of the service:

    [root@docker_host_0 opt]# ss -atn | grep 80
    LISTEN     0      128         :::80                      :::*
    [root@docker_host_0 opt]#
    [root@docker_host_0 opt]# for i in $(seq 6); do curl -s 127.0.0.1 -o /dev/null; done
    [root@docker_host_0 opt]#
    [root@docker_host_0 opt]# cat /opt/apps/app_0/source/logs/localhost_access_log.$(date +%F).txt
    172.20.0.3:80 172.20.0.5:42430 [31/May/2019:00:32:16 +0000] "GET / HTTP/1.0" 200 11184
    172.20.0.3:80 172.20.0.5:42436 [31/May/2019:00:32:16 +0000] "GET / HTTP/1.0" 200 11184
    172.20.0.4:80 172.20.0.5:45098 [31/May/2019:00:32:16 +0000] "GET / HTTP/1.0" 200 11184
    172.20.0.4:80 172.20.0.5:45122 [31/May/2019:00:32:16 +0000] "GET / HTTP/1.0" 200 11184
    172.20.0.2:80 172.20.0.5:59294 [31/May/2019:00:32:16 +0000] "GET / HTTP/1.0" 200 11184
    172.20.0.2:80 172.20.0.5:59306 [31/May/2019:00:32:16 +0000] "GET / HTTP/1.0" 200 11184
    [root@docker_host_0 opt]#
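    The same distribution is visible on the proxy side, because the log_format main defined above records $upstream_addr for every request; each line of the access log should name a different webapp container as the upstream (output omitted here):

    tail -n 6 /opt/apps/app_0/nginx/logs/webapp-access.log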

    Scale out the number of containers in the service:

    Running docker-compose up again for the same service, --scale dynamically increases or decreases the container count from its current value.

    [root@docker_host_0 opt]# docker-compose -f tomcat-with-nginx-compose.yml up -d --scale webapp=6
    Starting opt_webapp_1 ... done
    Starting opt_webapp_2 ... done
    Starting opt_webapp_3 ... done
    Creating opt_webapp_4 ... done
    Creating opt_webapp_5 ... done
    Creating opt_webapp_6 ... done
    opt_proxy_1 is up-to-date
    [root@docker_host_0 opt]#
    [root@docker_host_0 opt]# docker container ls -a
    CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                         NAMES
    b9fc74985a13        centos:latest         "bin/catalina.sh run"    9 seconds ago       Up 7 seconds                                      opt_webapp_4
    29e9837c7b4d        centos:latest         "bin/catalina.sh run"    9 seconds ago       Up 7 seconds                                      opt_webapp_5
    5e0a0611bb2f        centos:latest         "bin/catalina.sh run"    9 seconds ago       Up 8 seconds                                      opt_webapp_6
    6b55fe98a99c        tengine_nginx:2.3.0   "/bin/sh -c 'nginx -…"   3 minutes ago       Up 3 minutes        0.0.0.0:80->80/tcp, 443/tcp   opt_proxy_1
    0617d640c60a        centos:latest         "bin/catalina.sh run"    3 minutes ago       Up 3 minutes                                      opt_webapp_2
    c85f2de181cd        centos:latest         "bin/catalina.sh run"    3 minutes ago       Up 3 minutes                                      opt_webapp_3
    2517e03f11c9        centos:latest         "bin/catalina.sh run"    3 minutes ago       Up 3 minutes                                      opt_webapp_1
    [root@docker_host_0 opt]#

    Remove the services:

    docker-compose down removes the services: it stops and removes the containers and networks associated with them; the --rmi and -v/--volumes options additionally remove the associated images and volumes.
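    A teardown that also removes the built images and any named volumes would look roughly like this (illustrative only, not run in this walkthrough):

    docker-compose -f tomcat-with-nginx-compose.yml down --rmi all -v

    Here a plain down is sufficient: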

    [root@docker_host_0 opt]# docker-compose -f tomcat-with-nginx-compose.yml down
    Stopping opt_webapp_4 ... done
    Stopping opt_webapp_5 ... done
    Stopping opt_webapp_6 ... done
    Stopping opt_proxy_1  ... done
    Stopping opt_webapp_2 ... done
    Stopping opt_webapp_3 ... done
    Stopping opt_webapp_1 ... done
    Removing opt_webapp_4 ... done
    Removing opt_webapp_5 ... done
    Removing opt_webapp_6 ... done
    Removing opt_proxy_1  ... done
    Removing opt_webapp_2 ... done
    Removing opt_webapp_3 ... done
    Removing opt_webapp_1 ... done
    Removing network opt_default
    [root@docker_host_0 opt]#
    [root@docker_host_0 opt]# docker container ls -a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    [root@docker_host_0 opt]#
    [root@docker_host_0 opt]#
    [root@docker_host_0 opt]# ss -atn | grep 80
    [root@docker_host_0 opt]#

The stack approach

Host docker_host_0 initializes the swarm, and docker_host_1 joins it as a manager:

[root@docker_host_0 opt]# docker swarm init
Swarm initialized: current node (u9siv3gxc4px3xa85t5tybv68) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5icsimlouv1ppt09fxovvlvn9pp3prevlu2vus6wvtdilv6w86-3y28uwlmc5hcb61hw42oxe4j2 192.168.9.168:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5icsimlouv1ppt09fxovvlvn9pp3prevlu2vus6wvtdilv6w86-elvhukieu148f22dmimq914ki 192.168.9.168:2377

[root@docker_host_0 opt]#
[root@docker_host_1 ~]# ip addr show eth0 | sed -n '/inet /p' | awk '{print $2}'
192.168.9.169/24
[root@docker_host_1 ~]#
[root@docker_host_1 ~]# uname -r
3.10.0-957.12.2.el7.x86_64
[root@docker_host_1 ~]#
[root@docker_host_1 ~]# docker -v
Docker version 18.09.6, build 481bc77156
[root@docker_host_1 ~]#
[root@docker_host_1 ~]# docker swarm join --token SWMTKN-1-5icsimlouv1ppt09fxovvlvn9pp3prevlu2vus6wvtdilv6w86-elvhukieu148f22dmimq914ki 192.168.9.168:2377
This node joined a swarm as a manager.
[root@docker_host_1 ~]#
[root@docker_host_1 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
u9siv3gxc4px3xa85t5tybv68     docker_host_0       Ready               Active              Leader              18.09.6
qhgpqw9n5wwow1zfzji69eac0 *   docker_host_1       Ready               Active              Reachable           18.09.6
[root@docker_host_1 ~]#

Export the tengine_nginx:2.3.0 image on node docker_host_0 and transfer it to node docker_host_1:

[root@docker_host_0 opt]# docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
tengine_nginx       2.3.0               9404e1b71b70        About an hour ago   340MB
centos              latest              9f38484d220f        2 months ago        202MB
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker image save tengine_nginx:2.3.0 -o nginx.tar
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# ll -h nginx.tar
-rw------- 1 root root 338M May 31 00:56 nginx.tar
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# scp -P 9999 nginx.tar root@192.168.9.169:/opt
nginx.tar                                                      100%  337MB  83.1MB/s   00:04
[root@docker_host_0 opt]#
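Alternatively, in an environment with a private registry, the image could be tagged and pushed rather than copied around with save/scp/load (registry.example.com is a placeholder):

docker tag tengine_nginx:2.3.0 registry.example.com/tengine_nginx:2.3.0
docker push registry.example.com/tengine_nginx:2.3.0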

Load the tengine_nginx:2.3.0 image on node docker_host_1 and set up the same mount source paths as on docker_host_0:

[root@docker_host_1 ~]# cd /opt/
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# ls
apache-tomcat-8.5.40.tar.gz  containerd  jdk-8u212-linux-x64.tar.gz  nginx.tar
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# docker image load -i nginx.tar
d69483a6face: Loading layer  209.5MB/209.5MB
717661697400: Loading layer  144.3MB/144.3MB
Loaded image: tengine_nginx:2.3.0
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# docker image ls -a
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
tengine_nginx       2.3.0               9404e1b71b70        About an hour ago   340MB
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# mkdir -p /opt/{apps/app_0/source,jdks}
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# tar axf apache-tomcat-8.5.40.tar.gz --strip-components=1 -C apps/app_0/source/
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# sed -i 's/pattern="%h %l %u %t/pattern="%A:%{local}p %a:%{remote}p %t/' apps/app_0/source/conf/server.xml
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# tar axf jdk-8u212-linux-x64.tar.gz -C jdks/
[root@docker_host_1 opt]#

Edit the orchestration configuration file on node docker_host_0:

volumes and ports use the long syntax to define the mounts and port mappings.

services.<service>.deploy specifies the service's mode (mode), replica count (replicas), restart policy (restart_policy), and node placement (placement).

[root@docker_host_0 opt]# vi tomcat-with-nginx-stack.yml
version: "3.7"

services:
  webapp:
    image: centos:latest
    volumes:
      - type: bind
        source: /opt/jdks/jdk1.8.0_212
        target: /opt/jdks/jdk1.8.0_212
        read_only: true
      - type: bind
        source: /opt/apps/app_0/source
        target: /opt/apps/app_0
    environment:
      JAVA_HOME: /opt/jdks/jdk1.8.0_212
    working_dir: /opt/apps/app_0
    command: bin/catalina.sh run
    deploy:
      mode: replicated
      replicas: 3
      restart_policy:
        condition: on-failure

  proxy:
    image: tengine_nginx:2.3.0
    volumes:
      - type: bind
        source: /opt/apps/app_0/nginx/conf
        target: /usr/local/nginx/conf
        read_only: true
      - type: bind
        source: /opt/apps/app_0/nginx/logs
        target: /usr/local/nginx/logs
    deploy:
      placement:
        constraints:
          - node.hostname == docker_host_0
      mode: global
      restart_policy:
        condition: on-failure
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: ingress

Deploy the stack under the name web-cluster:

[root@docker_host_0 opt]# docker stack deploy -c tomcat-with-nginx-stack.yml web-cluster
Creating network web-cluster_default
Creating service web-cluster_proxy
Creating service web-cluster_webapp
[root@docker_host_0 opt]#

The replicated service webapp is distributed across the two nodes, and the global service proxy is placed on node docker_host_0 according to its constraint (constraints):

[root@docker_host_0 opt]# docker service ls
ID                  NAME                 MODE                REPLICAS            IMAGE                 PORTS
onshpipwwcmd        web-cluster_proxy    global              1/1                 tengine_nginx:2.3.0   *:80->80/tcp
pvj1cyvutjc5        web-cluster_webapp   replicated          3/3                 centos:latest
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker service ps web-cluster_webapp
ID                  NAME                   IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
rp9xuqzbns3p        web-cluster_webapp.1   centos:latest       docker_host_1       Running             Running 12 seconds ago
479ea4e8q8k1        web-cluster_webapp.2   centos:latest       docker_host_1       Running             Running 12 seconds ago
nlr7lc6g7m4v        web-cluster_webapp.3   centos:latest       docker_host_0       Running             Running 13 seconds ago
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker service ps web-cluster_proxy
ID                  NAME                                              IMAGE                 NODE                DESIRED STATE       CURRENT STATE            ERROR                       PORTS
ma8wr6kn8vyf        web-cluster_proxy.u9siv3gxc4px3xa85t5tybv68       tengine_nginx:2.3.0   docker_host_0       Running             Running 10 seconds ago
j6w9au6v8tzt         \_ web-cluster_proxy.u9siv3gxc4px3xa85t5tybv68   tengine_nginx:2.3.0   docker_host_0       Shutdown            Failed 15 seconds ago    "task: non-zero exit (1)"
[root@docker_host_0 opt]#
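The replica count can also be changed after deployment, either by editing replicas in the file and re-running docker stack deploy, or directly with docker service scale (not part of the run above):

docker service scale web-cluster_webapp=6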

stack defaults to swarm-scoped overlay networking: the stack gets its own overlay network, alongside the ingress network created when swarm mode was initialized:

[root@docker_host_0 opt]# docker network ls
NETWORK ID          NAME                  DRIVER              SCOPE
cb90714e47b3        bridge                bridge              local
dfe3ba6e0df5        docker_gwbridge       bridge              local
a019d8b63640        host                  host                local
mxcmpb9uzjy2        ingress               overlay             swarm
bb7095896ade        none                  null                local
qn3rp2t93lli        web-cluster_default   overlay             swarm
[root@docker_host_0 opt]#

The proxied web service can be reached through both docker_host_0 and docker_host_1:

[root@docker_host_1 opt]# ss -atn | grep 80
LISTEN     0      128         :::80                      :::*
[root@docker_host_0 opt]# ss -atn | grep 80
LISTEN     0      128         :::80                      :::*
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# curl -I -o /dev/null -s -w %{http_code} 127.0.0.1
200
[root@docker_host_0 opt]# curl -I -o /dev/null -s -w %{http_code} 192.168.9.168
200
root@docker_host_0 opt]# curl -I -o /dev/null -s -w %{http_code} 192.168.9.169
200
[root@docker_host_0 opt]#

    The depends_on directive only controls the order in which images are built and containers are started and stopped. To handle dependencies between the applications inside the containers, the command run in the container has to be delayed manually, or a third-party tool such as dockerize can be used.
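    A minimal sketch of such a manual slow start: override the proxy service's command so that nginx only starts once webapp answers (curl is already present in the image, since the Dockerfile itself uses it):

    proxy:
      # ...unchanged keys omitted...
      command: bash -c 'until curl -sf -o /dev/null http://webapp:8080/; do echo waiting for webapp; sleep 2; done; exec nginx -g "daemon off;"'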

Differences and similarities

Both stack and compose orchestrate services through a YAML- or JSON-formatted configuration file. The main differences are:

  • Some directives are not compatible between the two approaches, e.g. build, deploy, depends_on, restart_policy.
  • stack is built into the docker engine and requires swarm mode; the containers that make up one service may span several hosts, so the image must exist locally on those hosts or in a reachable registry. compose has to be installed separately, does not require swarm mode, and all containers live on the single current host.
  • stack can only deploy services from images that have already been built; compose supports both building images and deploying services, together or separately.
