Initialise your app
$ mkdir my-app && cd my-app
$ yarn init -y
Then go into the generated package.json and add koa as a dependency…
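Something like this, with whatever recent koa version you fancy:
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "koa": "^2.13.0"
  }
}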
Making a Koa server
If you haven’t used it yet, Koa is a shinier Express; basically it’s a Node framework for making servers. The simplest version of a Koa server will display any string you give it in the browser.
First create an index.js file…
$ touch index.js
…and add this to it…
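Something along these lines will do, a bare-bones Koa server listening on port 3000 (the port we’ll map later):
const Koa = require("koa");

const app = new Koa();

// whatever we put on ctx.body gets sent back as the response
app.use(ctx => {
  ctx.body = "booom";
});

app.listen(3000);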
ctx.body is the body of our response. So running this will print “booom” onto your webpage. Nice.
Pretty standard node stuff so far. Running yarn will install everything, and we can then run the server with node index.js.
$ yarn
$ node index.js
Bada-bing bada…
But this way, we’ve also got a node_modules folder in our app’s directory that we didn’t ask for. It’s big and smelly and our machines get full of them and they take ages to delete.
Why not put them in containers instead? That way we know where they are and can delete them all from one spot. The node command will also run as a daemon that we can get logs from as and when we need them.
Adding package.json scripts
We’ll add another couple of things to the package.json before we get to the Docker part.
Add nodemon to the dependencies so the server can listen to file changes and restart when we update the code. Also add a new script so we can use nodemon to run the app with yarn start…
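With those two changes the package.json ends up looking roughly like this (the versions are just examples):
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "koa": "^2.13.0",
    "nodemon": "^2.0.4"
  },
  "scripts": {
    "start": "nodemon index.js"
  }
}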
Making our YAML file
We’ll use docker-compose to manage the docker part of the app. Make a docker-compose.yml file that looks like this…
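A compose file along these lines does it:
version: "3.7"
services:
  node-app-no-modules:
    image: node:14-alpine
    command: sh -c "yarn && yarn start"
    ports:
      - "3000:3000"
    working_dir: /app
    volumes:
      - ./index.js:/app/index.js
      - ./package.json:/app/package.json
      - ./yarn.lock:/app/yarn.lock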
Let’s go through each line step-by-step.
version: "3.7" — every docker-compose file has to have this. It’s the docker-compose version you want to use; 3.7 is just the current stable version. You can match the docker-compose version with the docker version you’re using by checking the compatibility matrix on Docker’s website.
services: — this is where you list the containers you’re going to use. Yaml files are indent sensitive, so anything after services: with more indents is considered a child of this section.
node-app-no-modules: — the name of the service; it can be anything you want to call it. When you do docker ps this name will be listed under the NAMES column.
image: node:14-alpine — the foundation of a container is its image. This one can be found on Docker Hub. It’s built from Alpine, a super-lightweight Linux distribution that’s great for running inside containers, which need to be as light as possible. So Alpine is the first image, then Node built another image on top of that which also includes Node 14.
command: sh -c "yarn && yarn start" — this is the same as running docker exec: a terminal command you run inside the container after it’s built. Usually your terminal auto-runs bash or sh when you open it, but you can’t take anything for granted with command and exec, so rather than just putting yarn && yarn start you have to say run it with sh (Alpine uses sh rather than bash). Also remember the “start” script we put in our package.json earlier; this is where the app uses nodemon.
ports: 3000:3000 — mapping ports lets the browser look at port 3000 on our OS, which plugs into the container’s port 3000. If the app was set to app.listen(1234) we could still map it to our OS’ port 3000 by doing ports: 3000:1234, and we’d still go to http://localhost:3000 in the browser.
working_dir: /app — inside the container we’d start at the root directory, which is /. We can give the container a working directory to tell it not to put all our files into the root, but into a folder called app instead. You don’t really have to do this, but it’s tidier.
volumes: ... — this part is important. We want to bind the files in our project with the files in the container, so when they change the container’s files also change. Let’s explain a bit further…
Bind Volumes
We’re using what Docker calls “bind mounts” to link our files with the ones in the container.
volumes:
- ./index.js:/app/index.js
This first “bind volume” item says: take the index.js file in this directory and bind it with the index.js file that’s inside the container’s working_dir.
We set the working directory to /app earlier, so we know that docker-compose will put a version of index.js in there for the container.
We’re binding index.js because it might change when we update code. We don’t want to re-run the container all over again when we update stuff. Bind mounts give us a little window into the container while it’s running.
The next two files are just other files that might change while the container is up.
volumes:
- ./index.js:/app/index.js
- ./package.json:/app/package.json
- ./yarn.lock:/app/yarn.lock
I could just add the current directory, ./:/app/, but then we’d also bind node_modules and the folder would appear in my local dir.
If the app grows I could make a src dir to put all my code in, then just bind that instead of index.js: ./src:/app/src.
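In that case the volumes section would just swap the index.js bind for the src one, something like:
volumes:
  - ./src:/app/src
  - ./package.json:/app/package.json
  - ./yarn.lock:/app/yarn.lock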
Drum roll please…
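Run it from the same directory as your docker-compose.yml…
$ docker-compose up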
docker-compose up will download the node:14-alpine image, build it, install the node_modules, bind our files and expose the app to port 3000.
You can also run it as a daemon by using the -d flag.
$ docker-compose up -d
You’ll notice there’s no node_modules in your directory, and nodemon will restart the server for you, so you can update index.js and the server will update as well.
If you want logs you can run docker logs node-app-no-modules and you’ll see the latest logs as if they were from your ordinary terminal output…
[Note: docker might append a number to your service/container name, so if docker logs <service-name> doesn’t work, try running docker ps to get the actual name of the running container.]
Or if you’re using a Mac, the Docker dashboard app can show you the logs in real-time…
This basically turns Docker into a kind of garbage collector for your local dev environment. It can hold on to any files that get auto-generated at run-time but that you don’t want hanging around afterwards.
You could also use it for snapshots, test-generated fodder or log files.
All you have to do is use volumes: to control the files you want to keep and exclude the ones you don’t.
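For example, if the app wrote log files you wanted to hang on to, you could bind a (hypothetical) logs folder alongside the code and leave everything else inside the container:
volumes:
  - ./index.js:/app/index.js
  - ./package.json:/app/package.json
  - ./yarn.lock:/app/yarn.lock
  # hypothetical folder: anything the app writes here shows up locally
  - ./logs:/app/logs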