Simplifying local dev setup with Docker Compose
If you’ve ever had to deal with setting up a Node.js project in which you had to install a bunch of things – like MySQL/Postgres, Redis, etc. – and then run some setup scripts just to be able to get the project running locally on your machine…
then you’ve likely experienced the pain of losing half a day – at least – solely to getting set up.
This is especially frustrating and anxiety-inducing if you’re new to the team and want to start contributing right away, not waste time in the maze of steps you have to run, or waste time having to ask the team every 5 minutes how to get over the next install hurdle.
What’s worse is, as the project evolves, you might need to install more things, you might have more complex setup scripts, and (worst of all IMO) documentation for that setup might become out of date.
Rather than having to install a bunch of things – or figure out what you need to install in the first place, in case of bad documentation – there’s a much easier way that can get you up and running in as little as one or two commands.
Enter Docker Compose
Docker Compose gives us the ability to define install dependencies – like databases and other software – and run them within containers that your “main” code can interact with.
In order to best explain how to use Compose – and how to convert an existing project with local install steps, scripts, etc. – I’ll use an example of a demo repo I wrote a while back (it accompanied this post on designing reliable queues).
When I originally built that project, it was using “the old way”, without Compose.
But I recently re-wrote it to use Compose for creating Redis and Postgres containers, and to be able to run the tests against those containers (using Compose is also really good for having local test databases).
New world and old world
First, let’s look at how the project was set up using “the old way”:
– first install Homebrew
– then install Postgres
– then create a “root” database
– then define the schema
– then run a script to install Redis
– then run a script to start Postgres
– then run a script to start Redis
That’s a lot of steps…
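To make the pain concrete, here’s a rough sketch of what those old-world steps might look like as shell commands. This is a hypothetical reconstruction for illustration – the actual script names and schema file in the original repo may have differed:

```shell
# Hypothetical sketch of the manual, pre-Compose setup (macOS)

# Install Homebrew (one-time)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install and start Postgres
brew install postgresql
brew services start postgresql

# Create the database and define the schema by hand
createdb library
psql -d library -f init.sql

# Download, build, and start Redis from source
wget http://download.redis.io/releases/redis-5.0.6.tar.gz
tar xzf redis-5.0.6.tar.gz && cd redis-5.0.6 && make
src/redis-server &
```

Every one of those commands is a place where a new teammate can get stuck.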
Now, let’s take a look at the steps involved using Docker Compose:
```shell
docker-compose up
```
…and that’s it.
How were we able to accomplish this?
Let’s look at how I converted this project over to using Compose.
Postgres
Instead of having to install Postgres (and Homebrew, if you didn’t already have it installed), and then define our database and schema, using Compose that becomes:
```yaml
version: '3.7'
services:
  db_queue:
    image: postgres:9.6.17
    container_name: db_queue
    environment:
      POSTGRES_DB: library
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  db-data:
```
Note that the above is contained in the docker-compose.yml file in the root of our project.
Second note: you’ll need to have Docker installed on your machine in order to use Docker and Docker Compose.
We define our “install dependencies” within the services section – in this case, Postgres.
Then we define the basic environment variables that Postgres needs to start up the database. In the old world we created the database from the command line via psql; here we just define it under POSTGRES_DB.
The service’s volumes section uses an initialization script (more on this in a second) and defines a database volume that gets “mounted” alongside the container. And we define that volume name using the “root” volumes section, in this case using the name db-data.
The reason we do that is so that if we bring down the “stack” using docker-compose down, it won’t clear the schema definitions and data stored in the database. Note: if we want to delete that information and bring it totally down, we can use the command docker-compose down -v, with the -v flag standing for “volumes”.
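Put together, the lifecycle commands look like this (standard docker-compose flags, run from the project root where docker-compose.yml lives):

```shell
# Start everything defined in docker-compose.yml (-d runs it in the background)
docker-compose up -d

# Stop and remove the containers; the db-data volume (and your data) survives
docker-compose down

# Stop everything AND delete named volumes -- the next `up` starts from a clean slate
docker-compose down -v
```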
The init.sql script (used to create the table schema as the container boots up) still needs to be written, but instead of you having to manually define the schema, Compose runs the SQL script for you. In other words, it’s automatic rather than manual, and removes a step for us.
And here’s what that init.sql script looks like:
```sql
CREATE TABLE books (book_number int, isbn text);
```
Lastly, we map the container port to the host machine port (the host machine being your machine itself), so that you can access the container from your machine. That’s done in the service’s ports section.
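Because of that port mapping, your app can connect to the container exactly as if Postgres were installed locally. As a minimal sketch (the helper name pgUrl is hypothetical, not from the original repo), here’s how the Compose environment settings translate into the connection string your code would use:

```javascript
// Builds a Postgres connection string matching the compose file's settings.
// Defaults mirror the 5432:5432 port mapping to localhost.
function pgUrl({ user, password, db, host = 'localhost', port = 5432 }) {
  return `postgres://${user}:${password}@${host}:${port}/${db}`;
}

// Matches POSTGRES_USER / POSTGRES_PASSWORD / POSTGRES_DB from docker-compose.yml
console.log(pgUrl({ user: 'root', password: 'password', db: 'library' }));
// -> postgres://root:password@localhost:5432/library
```

You’d hand that string to whatever Postgres client your project uses.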
Redis
For Redis, it’s even simpler. In that same services section, we do:
```yaml
redis_queue:
  image: redis:5.0.6
  container_name: redis_queue
  ports:
    - 6379:6379
```
Define the Docker Redis image to use, give the container a name, and map the ports. Simple.
Compared to the old world – where we had to run a script that used wget to download Redis and build that code using make, then start Redis using a separate script – the Compose way is much easier.
Leveraging the Compose containers
Real quick, here’s the docker-compose.yml file in its entirety:
```yaml
version: '3.7'
services:
  redis_queue:
    image: redis:5.0.6
    container_name: redis_queue
    ports:
      - 6379:6379
  db_queue:
    image: postgres:9.6.17
    container_name: db_queue
    environment:
      POSTGRES_DB: library
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  db-data:
```
Like I mentioned before, all we need to do to start the “stack” is run docker-compose up, and Docker will use the Compose file and the services defined therein to spin up the containers.
Because we have the container ports mapped to the local machine, we can run the unit/integration tests using npm test – nothing different we need to do.
You can also run the code against the containers, not just the tests. Simple.
Wrapping up
If you’re continuously bumping up against problems running your project locally, strongly consider using Docker Compose for this instead.
It makes defining a local “stack” for local development a lot simpler and more headache-free than installing a bunch of stuff on your machine. And in this post we’ve really only scratched the surface of what you can do. It can make your developer life SO much easier.
Knowing how to set up a project for easy local development is one hurdle… understanding how to structure your project is another. Want an Express REST API structure template that makes it clear where your logic should go? Sign up below to receive that template, plus a post explaining how that structure works and why it’s set up that way, so you don’t have to waste time wondering where your code should go. You’ll also receive all my new posts directly to your inbox!