Introduction
In this tutorial, you will learn how to dockerize a Ruby on Rails application. The application we’re going to build will make use of PostgreSQL, Redis, and Sidekiq.
We’ll also be using Unicorn and Nginx in both development and production. If you would prefer to use Puma or something else, this shouldn’t be an issue.
After reading this article, you will:
- Have a basic idea of what Docker is.
- Know how Docker can help you streamline development.
- Know how to use Continuous Integration and Delivery (CI/CD) to build and test your Ruby projects.
You can find the complete code for this tutorial in this repository.
What is Docker?
Docker allows you to package up an application or service with all of its dependencies into a standardized unit. This unit is typically labeled as a Docker image.
Everything the application needs to run is included. The Docker image contains the code, runtime, system libraries and anything else you would install on a server to make it run if you weren’t using Docker.
What Makes Docker Different from a Virtual Machine
You may have used Vagrant, VirtualBox, or VMWare to run a virtual machine. They allow you to isolate services, but there are a few major differences that make virtual machines much less efficient.
For starters, you need an entire guest operating system for each application you want to isolate. It also takes many seconds to boot up a virtual machine, and each VM can potentially be gigabytes in size.
Docker containers share your host’s kernel, and isolation is done using cgroups, namespaces, and other Linux kernel features. Docker is very lightweight: it typically takes a few milliseconds for a container to start, and running a container doesn’t use much disk space at all.
What’s the Bottom Line?
What if you could develop your Rails application in isolation on your workstation without using RVM or chruby, and changing Ruby versions were super easy?
What if, as a consultant or freelancer with 10 Rails projects, you had everything you needed isolated for each project without needing to waste precious SSD disk space?
What if you could spin up your Rails, PostgreSQL, Redis, and Sidekiq stack in about 3 seconds?
What if you wanted to share your project on GitHub, and other developers only had to run a single command to get everything running in minutes?
All of this and much more is possible thanks to Docker.
The Benefits of Using Docker
If you’re constantly looking for ways to improve your productivity and make the overall software development experience better, you’ll appreciate the following 5 key benefits Docker offers:
1. Cross Environment Consistency
Docker allows you to encapsulate your application in such a way that you can easily move it between environments. It will work properly in all environments and on all machines capable of running Docker.
2. Expand Your Development Team Painlessly
You should not have to hand a 30-page document to a new developer to teach them how to set up your application so they can run it locally. This process can take all day or longer, and the new developer is bound to make mistakes.
With Docker, all developers on your team can get your multi-service application running on their workstations in an automated, repeatable, and efficient way. You just run a few commands, and minutes later it all works.
3. Use Whatever Technology Fits Best
If you’re a startup or a shop that uses only one language, you could be putting yourself at a disadvantage. Since you can isolate an application in a Docker container, it becomes possible to broaden your horizons as a developer by experimenting with new languages and frameworks.
You no longer have to worry about other developers having to set up your technology of choice. You can hand them a Docker image and tell them to run it.
4. Build Your Image Once and Deploy It Many Times
Since your applications are inside of a pre-built Docker image, they can be started in milliseconds. This makes it very easy to scale up and down.
Time-consuming tasks such as installing dependencies only need to be run once at build time. Once the image has been built, you can move it around to many hosts.
This not only helps with scaling up and down quickly, but it also makes your deployments more predictable and resilient.
5. Developers and Operation Managers Can Work Together
Docker’s toolset allows developers and operation managers to work together towards the common goal of deploying an application.
Docker acts as an abstraction. You can distribute an application, and members of another team do not need to know how to configure or set up its environment.
It also becomes simple to distribute your Docker images publicly or privately. You can keep tabs on what changed when new versions were pushed, and more.
Prerequisites
You will need to install Docker. Docker can be run on most major Linux distributions, and there are tools to let you run it on OSX and Windows too.
This tutorial focuses on Linux users, but it will include comments when things need to be adjusted for OSX or Windows.
Installing Docker
Follow one of the installation guides below for your operating system:
- Linux: https://docs.docker.com/get-started/
- Windows and Mac: https://www.docker.com/products/docker-desktop
Before proceeding, you should have Docker installed and you need to have completed at least the hello world example included in one of the installation guides above.
The Rails Application
The application we’re going to build will be for the latest version of Rails 6, which happens to be 6.0.2 at the time of writing.
Create a Repository
Create a new GitHub repository to host your code:
- Follow the instructions to create a repo.
- Set the language to Rails.
- Create the repository.
- Clone it to your machine:

```shell
$ git clone YOUR_REPOSITORY_URL
```
Generating a New Rails Application
We’re going to generate a new Rails project without even needing Ruby installed on our workstation. We can do this by using the official Ruby Docker image.
Creating a Rails Image
We’ll install Rails in a Docker container. For that, we’ll need a Dockerfile. A Dockerfile contains all the commands needed to install the programs and libraries the image requires, in a special syntax that is easy to read.

Create a file called `Dockerfile.rails`:

```dockerfile
# Dockerfile.rails
FROM ruby:2.7
MAINTAINER maintainer@example.com

ARG USER_ID
ARG GROUP_ID

RUN addgroup --gid $GROUP_ID user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user

ENV INSTALL_PATH /opt/app
RUN mkdir -p $INSTALL_PATH

RUN gem install rails bundler
RUN chown -R user:user /opt/app

WORKDIR /opt/app
USER $USER_ID

CMD ["/bin/sh"]
```
The basic Dockerfile commands are:
- FROM : defines what image to start from. We’ll use the official Ruby image as a starting point.
- ARG : specifies build-time argument variables. If your workstation is running Linux, the user and group ids should match between the host and the docker container.
- RUN : executes commands inside the container. In the example, we use it to create a user and group and then to install the Rails gems.
- ENV : defines environment variables.
- WORKDIR : changes the current directory inside the container.
- USER : changes the active user inside the container.
- CMD : defines the program to run when the container starts.
To build the image:
```shell
$ docker build -t rails-toolbox \
  --build-arg USER_ID=$(id -u) \
  --build-arg GROUP_ID=$(id -g) \
  -f Dockerfile.rails .
```
Creating the Project
We’ll use the new Rails image to create our project:

```shell
# OSX/Windows users will want to remove --user "$(id -u):$(id -g)"
$ docker run -it \
  -v $PWD:/opt/app \
  rails-toolbox rails new --skip-bundle drkiq
```
`docker run` starts a new container and runs a program inside it:

- `-it`: attaches an interactive terminal to the container.
- `-v $PWD:/opt/app`: mounts the current directory into the container at `/opt/app`, so the generated project ends up on your workstation.
- `rails new --skip-bundle drkiq`: the command executed inside the container; it generates the project without running `bundle install`.
After running the command, you should find a new drkiq directory containing a brand new Rails project.
`rails new` creates a new git repository, but since we already have one at the top level of the project, we don’t need it. You can delete it:

```shell
$ rm -rf drkiq/.git
```
Setting Up a Strong Base
Before we start adding Docker-specific files to the project, let’s add a few gems to our Gemfile
and make a few adjustments to our application to make it production-ready.
Modifying the Gemfile
Add the following lines to the bottom of your `Gemfile`:

```ruby
gem 'unicorn', '~> 5.5.2'
gem 'pg', '~> 1.2.2'
gem 'sidekiq', '~> 6.0.4'
gem 'redis-rails', '~> 5.0.2'
```
DRYing Out the Database Configuration
Change your `config/database.yml` to look like this:

```yaml
---
development:
  url: <%= ENV['DATABASE_URL'].gsub('?', '_development?') %>

test:
  url: <%= ENV['DATABASE_URL'].gsub('?', '_test?') %>

staging:
  url: <%= ENV['DATABASE_URL'].gsub('?', '_staging?') %>

production:
  url: <%= ENV['DATABASE_URL'].gsub('?', '_production?') %>
```
We will be using environment variables to configure our application. The above file allows us to use the DATABASE_URL
, while also allowing us to name our databases based on the environment in which they are being run.
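To see what the `gsub` call does, here is a quick sketch you can run with plain Ruby, using the development value of `DATABASE_URL` that we will put in our environment file later. The `?` that separates the database name from the query string gets the environment suffix:

```ruby
# The value DATABASE_URL will have in our .env file.
database_url = "postgresql://drkiq:test_db_password@postgres:5432/drkiq?encoding=utf8&pool=5&timeout=5000"

# In development mode, the ERB snippet rewrites the database name:
development_url = database_url.gsub('?', '_development?')
puts development_url
# => postgresql://drkiq:test_db_password@postgres:5432/drkiq_development?encoding=utf8&pool=5&timeout=5000
```

The same single environment variable therefore yields drkiq_development, drkiq_test, drkiq_staging, or drkiq_production depending on which section of the file Rails reads.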
DRYing Out the Secrets File
Create a `config/secrets.yml` file; it should look like this:

```yaml
development: &default
  secret_key_base: <%= ENV['SECRET_TOKEN'] %>

test:
  <<: *default

staging:
  <<: *default

production:
  <<: *default
```
YAML is a data serialization language. If you’ve never seen this syntax before: `&default` creates an anchor for the development settings, and `<<: *default` merges those settings into each of the other environments. The net effect is that every environment uses the same SECRET_TOKEN environment variable.
This is fine since the variable’s value will be different in each environment.
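If you want to convince yourself that the `&default` anchor and `<<: *default` merge in the file above behave this way, here is a quick sketch using Ruby’s standard YAML library, with a literal token standing in for the ERB tag:

```ruby
require 'yaml'

# Same structure as config/secrets.yml, with a literal value in place of
# the <%= ENV['SECRET_TOKEN'] %> ERB tag.
yaml = <<~YML
  development: &default
    secret_key_base: not-a-real-token
  test:
    <<: *default
  production:
    <<: *default
YML

# Ruby 3.1+ disables YAML aliases in YAML.load by default, so fall back to
# unsafe_load (the pre-3.1 behavior) when it is available.
load_method = YAML.respond_to?(:unsafe_load) ? :unsafe_load : :load
secrets = YAML.public_send(load_method, yaml)

puts secrets['test']['secret_key_base']        # => not-a-real-token
puts secrets['production']['secret_key_base']  # => not-a-real-token
```

Every environment resolves to the same `secret_key_base` key without repeating it.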
Editing the Application Configuration
Add the following lines to your `config/application.rb`:

```ruby
# ...
module Drkiq
  class Application < Rails::Application
    # We want to set up a custom logger which logs to STDOUT. Docker expects
    # your application to log to STDOUT/STDERR and to be run in the foreground.
    config.log_level = :debug
    config.log_tags  = [:subdomain, :uuid]
    config.logger    = ActiveSupport::TaggedLogging.new(Logger.new(STDOUT))

    # Since we're using Redis for Sidekiq, we might as well use Redis to back
    # our cache store. This keeps our application stateless as well.
    config.cache_store = :redis_store, ENV['CACHE_URL'],
                         { namespace: 'drkiq::cache' }

    # If you've never dealt with background workers before, this is the Rails
    # way to use them through Active Job. We just need to tell it to use Sidekiq.
    config.active_job.queue_adapter = :sidekiq

    # ...
  end
end
```
Creating the Unicorn Config
Next, create the `config/unicorn.rb` file and add the following content to it:

```ruby
# Heavily inspired by GitLab:
# https://github.com/gitlabhq/gitlabhq/blob/master/config/unicorn.rb.example

# Go with at least 1 per CPU core, a higher amount will usually help for fast
# responses such as reading from a cache.
worker_processes ENV['WORKER_PROCESSES'].to_i

# Listen on a tcp port or unix socket.
listen ENV['LISTEN_ON']

# Use a shorter timeout instead of the 60s default. If you are handling large
# uploads you may want to increase this.
timeout 30

# Combine Ruby 2.0.0dev or REE with "preload_app true" for memory savings:
# http://rubyenterpriseedition.com/faq.html#adapt_apps_for_cow
preload_app true
GC.respond_to?(:copy_on_write_friendly=) &&
  GC.copy_on_write_friendly = true

# Enable this flag to have unicorn test client connections by writing the
# beginning of the HTTP headers before calling the application. This
# prevents calling the application for connections that have disconnected
# while queued. This is only guaranteed to detect clients on the same
# host unicorn runs on, and unlikely to detect disconnects even on a
# fast LAN.
check_client_connection false

before_fork do |server, worker|
  # Don't bother having the master process hang onto older connections.
  defined?(ActiveRecord::Base) &&
    ActiveRecord::Base.connection.disconnect!

  # The following is only recommended for memory/DB-constrained
  # installations. It is not needed if your system can house
  # twice as many worker_processes as you have configured.
  #
  # This allows a new master process to incrementally
  # phase out the old master process with SIGTTOU to avoid a
  # thundering herd (especially in the "preload_app false" case)
  # when doing a transparent upgrade. The last worker spawned
  # will then kill off the old master process with a SIGQUIT.
  old_pid = "#{server.config[:pid]}.oldbin"
  if old_pid != server.pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end

  # Throttle the master from forking too quickly by sleeping. Due
  # to the implementation of standard Unix signal handlers, this
  # helps (but does not completely) prevent identical, repeated signals
  # from being lost when the receiving process is busy.
  # sleep 1
end

after_fork do |server, worker|
  # Per-process listener ports for debugging, admin, migrations, etc..
  # addr = "127.0.0.1:#{9293 + worker.nr}"
  # server.listen(addr, tries: -1, delay: 5, tcp_nopush: true)

  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection

  # If preload_app is true, then you may also want to check and
  # restart any other shared sockets/descriptors such as Memcached,
  # and Redis. TokyoCabinet file handles are safe to reuse
  # between any number of forked children (assuming your kernel
  # correctly implements pread()/pwrite() system calls).
end
```
Creating the Sidekiq Initializer
Now create the `config/initializers/sidekiq.rb` file and add the following code to it:

```ruby
sidekiq_config = { url: ENV['JOB_WORKER_URL'] }

Sidekiq.configure_server do |config|
  config.redis = sidekiq_config
end

Sidekiq.configure_client do |config|
  config.redis = sidekiq_config
end
```
Whitelist Docker Host
Rails has a security feature that blocks access from unknown hosts. We want our different Docker containers to communicate with each other, so we need to whitelist the drkiq container.

Edit the `config/environments/development.rb` file and add the following line:

```ruby
config.hosts << "drkiq"
```
Creating the Environment Variable File
Last but not least, you need to create an environment file. Go to the top directory of your project and create a new file next to your `Dockerfile.rails` file:

```shell
$ cd ..
$ touch env-example
```
The contents of the example environment file are:

```shell
# Docker user and group ids
# On Linux these should match your ids
USER_ID=1000
GROUP_ID=1000

# You would typically use rake secret to generate a secure token. It is
# critical that you keep this value private in production.
SECRET_TOKEN=Wa4Kdu6hMt3tYKm4jb9p4vZUuc7jBVFw

# Unicorn is more than capable of spawning multiple workers, and in production
# you would want to increase this value but in development you should keep it
# set to 1.
#
# It becomes difficult to properly debug code if there's multiple copies of
# your application running via workers and/or threads.
WORKER_PROCESSES=1

# This will be the address and port that Unicorn binds to. The only real
# reason you would ever change this is if you have another service running
# that must be on port 8010.
LISTEN_ON=0.0.0.0:8010

# This is how we'll connect to PostgreSQL. It's good practice to keep the
# username lined up with your application's name but it's not necessary.
#
# Since we're dealing with development mode, it's ok to have a weak password
# such as yourpassword but in production you'll definitely want a better one.
#
# Eventually we'll be running everything in Docker containers, and you can set
# the host to be equal to postgres thanks to how Docker allows you to link
# containers.
#
# Everything else is standard Rails configuration for a PostgreSQL database.
DATABASE_URL=postgresql://drkiq:test_db_password@postgres:5432/drkiq?encoding=utf8&pool=5&timeout=5000

# Both of these values are using the same Redis address but in a real
# production environment you may want to separate Sidekiq to its own instance,
# which is why they are separated here.
#
# We'll be using the same Docker link trick for Redis which is how we can
# reference the Redis hostname as redis.
CACHE_URL=redis://redis:6379/0
JOB_WORKER_URL=redis://redis:6379/0
```
Copy the example file and customize it to your liking. The SECRET_TOKEN should be a random string. The final `.env` file is secret and should never be checked into git:

```shell
$ cp env-example .env
$ echo ".env" >> .gitignore
```
The above file allows us to configure the application without having to dive into the application code.
This file would also hold information like mail login credentials or API keys.
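As a sanity check, you can pick apart what the DATABASE_URL value encodes with Ruby’s standard URI and CGI libraries. A quick sketch, using the development value from the example file above:

```ruby
require 'uri'
require 'cgi'

url = URI.parse("postgresql://drkiq:test_db_password@postgres:5432/drkiq?encoding=utf8&pool=5&timeout=5000")

puts url.user      # => drkiq            (the database user)
puts url.password  # => test_db_password
puts url.host      # => postgres         (the Docker service name)
puts url.port      # => 5432
puts url.path      # => /drkiq           (the database name)

# The query string carries standard Rails connection options:
p CGI.parse(url.query)  # => {"encoding"=>["utf8"], "pool"=>["5"], "timeout"=>["5000"]}
```

Notice that the host is `postgres`, not an IP address: inside the Compose network, each container is reachable by its service name.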
Dockerizing Your Rails Application
Create the `Dockerfile` file and add the following content to it:

```dockerfile
# Dockerfile - Development environment
FROM ruby:2.7
MAINTAINER maintainer@example.com

ARG USER_ID
ARG GROUP_ID

RUN addgroup --gid $GROUP_ID user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user

ENV INSTALL_PATH /opt/app
RUN mkdir -p $INSTALL_PATH

# nodejs and yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg -o /root/yarn-pubkey.gpg && apt-key add /root/yarn-pubkey.gpg
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y --no-install-recommends nodejs yarn

# rails
RUN gem install rails bundler

COPY drkiq/Gemfile Gemfile
WORKDIR /opt/app/drkiq
RUN bundle install

RUN chown -R user:user /opt/app
USER $USER_ID

VOLUME ["$INSTALL_PATH/public"]

CMD bundle exec unicorn -c config/unicorn.rb
```
The above file creates the Docker image with:
- NodeJS and Yarn
- Rails
- Gems in the Gemfile
The last part of the Dockerfile sets the correct user and file permissions and starts the Unicorn HTTP server.
Configuring Nginx
While unicorn is perfectly capable of serving our application, for better performance and security, it’s recommended to put a real HTTP server in front. An HTTP server configured as a reverse-proxy protects our application from slow clients and speeds up connections thanks to caching.
We’ll use Nginx, a general-purpose HTTP server, in our setup.
Create a configuration file for Nginx called `reverse-proxy.conf` in the root directory of your project, next to the other Dockerfiles:

```nginx
# reverse-proxy.conf
server {
    listen 8020;

    server_name example.org;

    location / {
        proxy_pass http://drkiq:8010;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
Create a new file called `Dockerfile.nginx` to build our custom Nginx image:

```dockerfile
# Dockerfile.nginx
FROM nginx:latest

COPY reverse-proxy.conf /etc/nginx/conf.d/reverse-proxy.conf

EXPOSE 8020

STOPSIGNAL SIGTERM

CMD ["nginx", "-g", "daemon off;"]
```
Creating a dockerignore File
Next, create the `.dockerignore` file and add the following content to it:

```
.git
.dockerignore
.env
```
This file is similar to `.gitignore`. It will exclude matching files and folders from being built into your Docker image.
What is Docker Compose?
Docker Compose allows you to run one or more Docker containers easily. You can define everything in YAML and commit this file so that other developers can simply run `docker-compose up` and have everything running quickly.
Creating the Docker Compose Configuration File
Next, we will create the `docker-compose.yml` file and copy the following content into it:

```yaml
version: "3.7"

services:
  postgres:
    image: postgres:12.1
    environment:
      POSTGRES_USER: drkiq
      POSTGRES_PASSWORD: test_db_password
    ports:
      - '5432:5432'
    volumes:
      - drkiq-postgres:/var/lib/postgresql/data

  redis:
    image: redis:5.0.7
    ports:
      - '6379:6379'
    volumes:
      - drkiq-redis:/var/lib/redis/data

  drkiq:
    build:
      context: .
      args:
        USER_ID: "${USER_ID:-1000}"
        GROUP_ID: "${GROUP_ID:-1000}"
    depends_on:
      - postgres
      - redis
    volumes:
      - type: bind
        source: ./drkiq
        target: /opt/app/drkiq
    ports:
      - '8010:8010'
    env_file:
      - .env

  sidekiq:
    build:
      context: .
      args:
        USER_ID: "${USER_ID:-1000}"
        GROUP_ID: "${GROUP_ID:-1000}"
    command: bundle exec sidekiq
    depends_on:
      - postgres
      - redis
    volumes:
      - type: bind
        source: ./drkiq
        target: /opt/app/drkiq
    env_file:
      - .env

  nginx:
    build:
      context: .
      dockerfile: ./Dockerfile.nginx
    depends_on:
      - drkiq
    ports:
      - '8020:8020'

volumes:
  drkiq-postgres:
  drkiq-redis:
```
Everything in the above file is documented on Docker Compose’s website. The short version is:
- Postgres and Redis use Docker volumes to manage persistence.
- Postgres, Redis, and Drkiq all expose a port.
- Drkiq and Sidekiq both use bind mounts to bring the app code into the container for live editing.
- Drkiq and Sidekiq both depend on Postgres and Redis.
- Drkiq and Sidekiq both read in environment variables from `.env`.
- Sidekiq overrides the default `CMD` to run Sidekiq instead of Unicorn.
Creating the Volumes
In the `docker-compose.yml` file, we’re referencing volumes that do not exist. We can create them by running:

```shell
$ docker volume create --name drkiq-postgres
$ docker volume create --name drkiq-redis
```
When data is saved in PostgreSQL or Redis, it is written to these volumes on your workstation. This way you won’t lose your data when you restart the services, since a container’s own filesystem is ephemeral.
Running Everything
Now it’s time to put everything together and start-up our stack by running the following:
```shell
$ docker-compose up
```
The first time this command runs it will take quite a while because it needs to pull down all of the Docker images that our application requires.
This operation is mostly bound by network speed, so your times may vary.
At some point, it’s going to begin building the Rails application. You will eventually see the terminal output, including lines similar to these:
```
postgres_1 | ...
redis_1    | ...
drkiq_1    | ...
sidekiq_1  | ...
nginx_1    | ...
```
You will notice that the `drkiq_1` container threw an error saying the database doesn’t exist. This is a completely normal error to expect when running a Rails application because we haven’t initialized the database yet.
Initialize the Database
Hit `CTRL+C` in the terminal to stop everything. If you see any errors, you can safely ignore them.
Run the following commands to initialize the database:
```shell
# OSX/Windows users will want to remove --user "$(id -u):$(id -g)"
$ docker-compose run --user "$(id -u):$(id -g)" drkiq rake db:reset
$ docker-compose run --user "$(id -u):$(id -g)" drkiq rake db:migrate
```
The first command should warn you that `db/schema.rb` doesn’t exist yet, which is normal. Run the second command to remedy that. It should run successfully.
If you head over to the `db` folder in your project, you should notice that there is a `schema.rb` file, and that it’s owned by your user.

You may also have noticed that running either of the commands above also started Redis and PostgreSQL automatically. This is because we declared them as dependencies with `depends_on`; `docker-compose` is smart enough to start them first.
Running Everything, Round 2
Now that our database is initialized, try running the following:
```shell
$ docker-compose up
```
Testing It Out
Head over to http://localhost:8020. You should be greeted with the typical Rails introduction page.
Working with the Rails Application
Now that we’ve Dockerized our application, let’s start adding features to it to exercise the commands you’ll need to run to interact with your Rails application.
Right now the source code is on your workstation, and that source code is being mounted into the Docker container in real time through a volume.
This means that if you were to edit a file, the changes would take effect instantly, but right now we have no routes or any CSS defined to test this.
Generating a Controller
Run the following command to generate a `Pages` controller with a `home` action:

```shell
# OSX/Windows users will want to remove --user "$(id -u):$(id -g)"
$ docker-compose run --user "$(id -u):$(id -g)" drkiq rails g controller Pages home
```
In a second or two, it should provide everything you would expect when generating a new controller.
This type of command is how you’ll run future Rails commands. If you wanted to generate a model or run a migration, you would run them in the same way.
Modify the Routes File
Remove the `get 'pages/home'` line near the top of `config/routes.rb` and replace it with the following:

```ruby
root 'pages#home'
```
If you go back to your browser, you should see the new home page we have set up.
Adding a New Job
Use the following to add a new job:

```shell
# OSX/Windows users will want to remove --user "$(id -u):$(id -g)"
$ docker-compose run --user "$(id -u):$(id -g)" drkiq rails g job counter
```
Modifying the Counter Job
Next, replace the `perform` method in `app/jobs/counter_job.rb` to look like this:

```ruby
def perform(*args)
  21 + 21
end
```
Modifying the Pages Controller
Replace the `home` action in `app/controllers/pages_controller.rb` to look like this:

```ruby
def home
  # We are executing the job on the spot rather than in the background to
  # exercise using Sidekiq in a trivial example.
  #
  # Consult with the Rails documentation to learn more about Active Job:
  # http://edgeguides.rubyonrails.org/active_job_basics.html
  @meaning_of_life = CounterJob.perform_now
end
```
Modifying the Home View
The next step is to replace the `app/views/pages/home.html.erb` file to look as follows:

```erb
<h1>The meaning of life is <%= @meaning_of_life %></h1>
```
Compile Assets
With everything ready, we should precompile the CSS and JavaScript code and let webpack optimize them. This saves bandwidth and improves the user’s experience:
```shell
# OSX/Windows users will want to remove --user "$(id -u):$(id -g)"
$ docker-compose run --user "$(id -u):$(id -g)" drkiq rails webpacker:install
$ docker-compose run --user "$(id -u):$(id -g)" drkiq rails assets:precompile
```
Restart the Rails Application
You need to restart the Rails server to pick up new jobs, so hit `CTRL+C` to stop everything, and then run `docker-compose up` again.
If you reload the website you should see the changes we made.
Experimenting on Your Own
Here are a few things you can try to familiarize yourself with your new application, starting with something simple like changing the `h1` heading text in the home view.
All of these changes can be made without having to restart anything, so feel free to check out the results after you have performed each one.
Adding Some Tests
We can add some testing code to our application. Having tests will help us detect failures and weed out bugs.
Rails will search for test files in the `test` directory.

Create a test for the CounterJob job in a file called `test/jobs/counter_job_test.rb`:

```ruby
require 'test_helper'

class CounterJobTest < ActiveJob::TestCase
  test "returns 42" do
    assert_equal 42, CounterJob.perform_now
  end
end
```
Let’s add a second test for the Pages controller. Create a file called `test/controllers/pages_controller_test.rb`:

```ruby
require 'test_helper'

class PagesControllerTest < ActionDispatch::IntegrationTest
  test "should get home" do
    get "/"
    assert_response :success
  end
end
```
To run the tests:
```shell
# OSX/Windows users will want to remove --user "$(id -u):$(id -g)"
$ docker-compose run --user "$(id -u):$(id -g)" drkiq rails test

...
Finished in 4.850950s, 0.4123 runs/s, 0.4123 assertions/s.
2 runs, 2 assertions, 0 failures, 0 errors, 0 skips
```
Before continuing, ensure that all your code is checked into GitHub:

```shell
$ git add -A
$ git commit -m "initial commit"
$ git push origin master
```
Continuous Integration for Docker projects on Semaphore
With the help of Docker, we created a portable environment that we can share with other developers. In this section, we’ll learn how we can build Docker images to deploy to production.
Continuous Integration (CI) is a software development practice that creates a strong feedback loop around coding and testing. When we make a modification to the code, the CI system picks it up and runs it through a CI pipeline. The pipeline builds and tests the code, and we get an immediate result.
Prerequisites
We’ll need additional services to build and test the Docker images in a scalable way:
- Docker Hub : Create a free account using the Get Started button. Docker Hub provides unlimited public repositories for free.
- Semaphore : Head to Semaphore and sign up using the Try it Free button. Use your GitHub account to log in.
Next, we have to tell Semaphore how to connect with your Docker Hub account:
- Go to your Semaphore account.
- On the left navigation menu, click on Secrets below Configuration:
- Click on Create New Secret.
- Create a secret called “dockerhub” with two items: DOCKER_USERNAME (your Docker Hub username) and DOCKER_PASSWORD (your Docker Hub password).
- Click on Save Secret .
Production Images
Our Docker images work very well for development but are not suitable for production. For one thing, our images are not portable: they don’t contain our application code.
We’ll create new images that are independent and can be deployed anywhere.
Create a file called `Dockerfile.production` with the following contents:

```dockerfile
# Dockerfile.production
FROM ruby:2.7
MAINTAINER maintainer@example.com

ARG USER_ID
ARG GROUP_ID

RUN addgroup --gid $GROUP_ID user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user

ENV INSTALL_PATH /opt/app
RUN mkdir -p $INSTALL_PATH

RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg -o /root/yarn-pubkey.gpg && apt-key add /root/yarn-pubkey.gpg
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y --no-install-recommends nodejs yarn

RUN gem install rails bundler

COPY drkiq/Gemfile Gemfile
WORKDIR /opt/app/drkiq
RUN bundle install

COPY drkiq/ .

RUN chown -R user:user /opt/app
USER $USER_ID

RUN yarn install --check-files
RUN rails webpacker:install
RUN rails assets:precompile

VOLUME ["$INSTALL_PATH/public"]

CMD bundle exec unicorn -c config/unicorn.rb
```
If you compare the development and production Dockerfiles, you’ll find that the main differences are:
- We use the `COPY` command to copy the whole code directory into the container.
- We run the install and setup commands for yarn and rails, so the image has all the assets precompiled.
Push the new Dockerfile to GitHub:
```shell
$ git add Dockerfile.production
$ git commit -m "add dockerfile"
$ git push origin master
```
Continuous Integration Pipeline
You can set up a CI pipeline with a few clicks:
- Open your Semaphore account.
- On the left navigation menu, click on the + (plus sign) next to Projects :
- Find your repository and click on Choose :
- Select the Docker starter workflow. Click on Customize it first :
The Workflow Builder main components are:
- Pipeline : A pipeline has a specific objective, e.g. building. Pipelines are made of blocks that are executed from left to right in an agent.
- Agent : The agent is the virtual machine that powers the pipeline. We have three machine types to choose from. The machine runs an optimized Ubuntu 18.04 image with build tools for many languages.
- Block : blocks group jobs that can be executed in parallel. Jobs in a block usually have similar commands and configurations. Once all jobs in a block complete, the next block begins.
- Job : jobs define the commands that do the work. They inherit their configuration from their parent block.
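Under the hood, Semaphore stores this configuration as a `.semaphore/semaphore.yml` file in your repository, which the Workflow Builder edits for you. As a rough, illustrative sketch (the names and commands here are assumptions, not the exact file the builder generates), the concepts above map to YAML like this:

```yaml
# .semaphore/semaphore.yml (illustrative sketch)
version: v1.0
name: Docker build pipeline
agent:                      # the virtual machine that powers the pipeline
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:                     # blocks run one after another
  - name: Build
    task:
      jobs:                 # jobs in a block run in parallel
        - name: Build drkiq
          commands:
            - checkout      # clone the GitHub repository into the CI environment
```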
Build Block
The Build block creates our Docker images:
- Click on the Build block.
- Open the Secrets section and check the dockerhub item. This will import your Docker Hub credentials to all jobs in the block.
- Open the Prologue section and type the following commands in the box:
```shell
checkout
echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
```
- Click on the build job, rename it to “Build drkiq”.
- Type the following commands in the box:
```shell
docker pull $DOCKER_USERNAME/dockerizing-ruby-drkiq:latest || true
docker build -t $DOCKER_USERNAME/dockerizing-ruby-drkiq:latest \
  --cache-from=$DOCKER_USERNAME/dockerizing-ruby-drkiq:latest \
  --build-arg USER_ID=$(id -u) --build-arg GROUP_ID=$(id -g) \
  -f Dockerfile.production .
docker push $DOCKER_USERNAME/dockerizing-ruby-drkiq:latest
```
Already, we’re learning so much about how a Pipeline works:
- Prologue : the prologue is executed before each job in the block. Typically, it holds common set up commands.
- Checkout : checkout clones the GitHub repository into the CI environment.
- Docker login : connects the CI environment with your Docker Hub account.
- Docker Pull/Push : copies the image from and to Docker Hub. Before building, we copy the last image so we can benefit from Docker’s cache optimization.
To run the pipeline:
- Click on the Run the Workflow button.
- Set the branch to master.
- Click on Start.
- The pipeline starts immediately:
When complete, head over to your Docker Hub account; you should see a brand new image called dockerizing-ruby-drkiq:
Build Block Revisited
We’ll modify the build block to create the Nginx image:
- Click on Edit Workflow on the top-right corner.
- Click on the Build block.
- Click on + Add another job:
- Set the name of the job to “Build nginx”.
- Type the following commands in the box:
docker pull $DOCKER_USERNAME/dockerizing-ruby-nginx:latest || true
docker build -t $DOCKER_USERNAME/dockerizing-ruby-nginx:latest \
  --cache-from=$DOCKER_USERNAME/dockerizing-ruby-nginx:latest \
  -f Dockerfile.nginx .
docker push $DOCKER_USERNAME/dockerizing-ruby-nginx:latest
- Click on Run the Workflow and Start.
This time the pipeline builds our two images:
Running Tests
A CI pipeline wouldn’t be complete without tests. We’ll add a test job to check that the images are working.
The test block will:
- Pull the images built in the previous block
- Create the docker volumes
- Initialize the database
- Run the Rails tests using Docker Compose.
First, let's create a test-only Docker Compose file called docker-compose.test.yml:
# docker-compose.test.yml
version: "3.7"
services:
  postgres:
    image: postgres:12.1
    environment:
      POSTGRES_USER: drkiq
      POSTGRES_PASSWORD: test_db_password
    ports:
      - '5432:5432'
    volumes:
      - drkiq-postgres:/var/lib/postgresql/data
  redis:
    image: redis:5.0.7
    ports:
      - '6379:6379'
    volumes:
      - drkiq-redis:/var/lib/redis/data
  drkiq:
    image: $DOCKER_USERNAME/dockerizing-ruby-drkiq:latest
    links:
      - postgres
      - redis
    ports:
      - '8010:8010'
    env_file:
      - .env
  sidekiq:
    image: $DOCKER_USERNAME/dockerizing-ruby-drkiq:latest
    command: bundle exec sidekiq
    links:
      - postgres
      - redis
    env_file:
      - .env
  nginx:
    image: $DOCKER_USERNAME/dockerizing-ruby-nginx:latest
    links:
      - drkiq
    ports:
      - '8020:8020'
volumes:
  drkiq-postgres:
  drkiq-redis:
The development and test compose files are pretty similar. The only difference is that in test we reference the images by name instead of building them. We do this because we want to test the exact images that were uploaded to Docker Hub, not build new ones.
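To make the difference concrete, here is a side-by-side sketch of the drkiq service in each file (the development version shown here is an assumption based on the description above, where the image is built from the local Dockerfile):

```yaml
# docker-compose.yml (development): the image is built locally
#   drkiq:
#     build: .

# docker-compose.test.yml (test): the prebuilt image is pulled from Docker Hub
#   drkiq:
#     image: $DOCKER_USERNAME/dockerizing-ruby-drkiq:latest
```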
Push the new file to GitHub:
$ git pull origin master
$ git add docker-compose.test.yml
$ git commit -m "add test docker compose"
$ git push origin master
To add a test block:
- On Semaphore, click on Edit Workflow.
- Click on the + Add Block dotted box to create a new block.
- Set the name of the block to “Tests”.
- Open the Prologue section and type the following commands. The prologue holds the initialization commands we used earlier:
checkout
cp env-example .env
cat docker-compose.test.yml | envsubst | tee docker-compose.yml
echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
docker pull $DOCKER_USERNAME/dockerizing-ruby-drkiq:latest || true
docker-compose run --user "$(id -u):$(id -g)" drkiq rake db:reset
docker-compose run --user "$(id -u):$(id -g)" drkiq rake db:migrate
- Open the Secrets section and check the dockerhub item.
- Change the name of the job to “Rails Test”.
- Type the following command in the job’s box:
docker-compose run --user "$(id -u):$(id -g)" drkiq rails test
- Use the Run the Workflow and Start buttons.
After a few seconds we should have the images tested:
Where to Go Next?
Congratulations! You’ve finished dockerizing your Ruby on Rails application.
Dockerizing your application is the first step towards portable and easy deployments. There are many ways of running your dockerized applications:
- Hosted : if you have a server or a virtual machine, you can use Docker Compose to run your application.
- PaaS : you can use a Platform as a Service such as Heroku to run your Docker containers. Check the tutorials below for more information.
- Kubernetes : you can run your containers using an orchestration platform like Kubernetes. Kubernetes provides scalability, great flexibility and no-downtime upgrades. For more information check the tutorials below and read the Docker and Kubernetes introduction page.
P.S. Would you like to learn how to build sustainable Rails apps and ship more often? We’ve recently published an ebook covering just that — “Rails Testing Handbook”. Learn more and download a free copy.
Read next:
To learn how to deploy to Kubernetes check these tutorials:
- Sign up to receive a free ebook guide to CI/CD with Kubernetes
- How to Release Faster with Continuous Delivery for Google Kubernetes
- Continuous Integration and Delivery to AWS Kubernetes
- CI/CD for Spring Boot Microservices
- How To Build and Deploy a Node.js Application To DigitalOcean Kubernetes Using CI/CD
- Continuous Deployment with Google Container Engine and Kubernetes
- Lightweight Docker Images in 5 Steps
To learn about how to deploy to Heroku: