Deploying a Load-Balanced To-Do List App with Docker, AWS, and Nginx

The following is a very indulgent exercise (I wanted to spend some more time working with AWS, Docker, and DevOps in general).

Background:

Let’s imagine a world in which everyone uses pen and paper (and maybe sticky notes) to keep track of their to-do lists, but no one has thought to write an app for that.

But you’re a smart guy/gal and decide to build an app to do just that. After a couple of hours, you have a pretty barebones to-do list running locally on your MacBook Air. When you’re done with a task, you can check it off, or you can delete it. There’s no database (the app relies on localStorage in the user’s browser). You haven’t deployed it yet – it only exists on your local machine and in a private GitHub repo.

[Screenshot: the barebones to-do list running locally]

At some point, a buddy of yours sees you using this to-do list — let’s call it Tudoo. He wants to use it, too. So you end up deploying Tudoo, serving it up on a single machine. Now your friend (and the rest of the office) can use it. This works for about a week.

Word quickly spreads, and now you have thousands upon thousands of users simultaneously hitting Tudoo. Your single-machine deployment can’t handle that much traffic. There’s a lot of downtime. People are getting annoyed.


But this is good – this means there is high demand for what you’ve built. You may need to start thinking more long-term with your development goals. How do you allow for more concurrent connections to your site? How do you continue to build without worrying that what worked in development might not work in production? Where do you get started with all this if you’re a young developer?

Roadmap:

In the following post, I’ll take some basic steps toward resolving the issues above (or at least start resolving them). These steps will generally look like this:

  1. Setting up development on an EC2 instance
  2. Dockerizing the app
  3. Deploying the app to a load-balanced cluster of machines
  4. Stress-testing the app

Of course, I’m not saying this is the 100% correct way to do this, but it is one way I’ve seen it done for apps operating at a much larger scale than Tudoo. You wouldn’t actually do something like this for something as trivial as a to-do list (unless, of course, you needed to).

I chose to develop on a remote EC2 instance to maintain a consistent, predictable development environment, and so I wouldn’t have to use up any of the disk space of my local machine, but you definitely don’t have to do this. (I just wanted to get familiar with doing things this way).

We’re containerizing our app using Docker so that we don’t run into any incompatibilities from development to production. If you’re on a Mac, installing Docker Toolbox sets you up with a lightweight Docker Virtual Machine, Docker daemon+client, and Docker-Compose. Since I’m developing on an Amazon Linux machine, things will be slightly different.
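If you do go the Mac route, firing up the Toolbox VM looks something like this (assuming the default machine name):

docker-machine start default // boot the Docker VM
eval $(docker-machine env default) // point your docker client at it
docker-machine ip default // the IP to use instead of localhost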

And we’re load-balancing our app with an Nginx reverse proxy so that all traffic doesn’t end up hitting just one server (and thus overwhelming it).

Taking these measures will ultimately speed up further development and allow us to extend Tudoo’s functionality more reliably.

OK, let’s begin.

Developing on an EC2 Instance

Reasoning

To reiterate, this part is overkill — I just wanted to get practice doing this. Why is it overkill? Because the very reason someone would actually do this is basically taken care of by using Docker. Docker virtualizes the operating system so that, regardless of whether your local development machine is a Mac or Windows or whatever, the container in which your app runs sits atop a Linux distro.

So what are some valid reasons to use a remote dev machine, even after Docker has taken care of the above?

  • Your local computer no longer eats up hard drive space installing global dependencies! You just install everything on the remote EC2 instance.
  • You also now have backups in case anything goes weird. Your source code is on GitHub, your environment is on AWS, your images are on Docker Hub, and you can keep a local copy of the code on your machine if you choose.
  • I’ve spoken to freelancers who use this setup when working with multiple clients with varying tech needs.

Launching an EC2 instance

(I’m assuming you already have a non-root user set up in Identity & Access Management aka IAM. If you don’t, check out this article on how to do that!).

(1) Log in to your non-root admin user account’s AWS console (https://console.aws.amazon.com).

[Screenshot: AWS console sign-in page]

(2) Set up a security group with rules that expose ports 22 (SSH), 80 (HTTP), 443 (HTTPS), and 8080 (custom TCP rule). This security group, when associated with an EC2 instance, will let AWS know to allow inbound requests to those ports. Security Groups are located in the Network/Security section of the EC2 Management console.

[Screenshot: security group inbound rules in the EC2 Management console]
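As an aside, you can do the same thing from the AWS CLI. A quick sketch (assuming the CLI is installed and configured; ‘tudoo-dev’ is just a name I made up):

aws ec2 create-security-group --group-name tudoo-dev \
--description "Tudoo dev box"

/*
Open ports 22, 80, 443, and 8080 to inbound TCP traffic
*/

aws ec2 authorize-security-group-ingress --group-name tudoo-dev \
--protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name tudoo-dev \
--protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name tudoo-dev \
--protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name tudoo-dev \
--protocol tcp --port 8080 --cidr 0.0.0.0/0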

(3) Configure a new EC2 instance to run a t2.micro Amazon Linux AMI box, associating with it the security group you created in step 2. You can start the process of launching a new instance by going to the Instances section of the EC2 Management console.

[Screenshots: configuring the t2.micro Amazon Linux instance and attaching the security group]

(4) Launch the instance, making sure to associate your non-root user’s key pair. Take note of your instance’s IP address and/or public DNS! These will come in handy later when preparing to work off of the instance. Now you have an EC2 instance launched (it might take a couple of minutes for it to get up and running)!

[Image: creating a key pair, from Yevgeniy Brikman’s blog]

(5) SSH into your EC2 instance from your terminal.

/*
assuming this is where you saved your .pem file
when you created a non-root user in IAM.
*/

cd ~/.ssh

/*
grant 'Read by Owner' access privileges on the .pem file
*/

chmod 400 YOUR_AWS_CREDENTIALS_FILENAME.pem

/*
SSH into your EC2 instance by passing your .pem file
in as an identity file with which to authenticate the connection.
Now is the time to go dig up your EC2 instance's
IP address or public DNS 
*/

ssh -i YOUR_AWS_CREDENTIALS_FILENAME.pem ec2-user@YOUR_EC2_IP_ADDRESS

After entering that last command you should see something like this:

[Screenshot: the Amazon Linux welcome banner after SSHing in]

(6) Install dependencies on your EC2 instance and set up proper user privileging.

/*
YUM is a Red Hat Linux-based package manager
(similar to apt-get on Ubuntu). We'll be installing
emacs, git, and docker, then firing up the Docker service
*/
sudo yum update -y
sudo yum install -y emacs
sudo yum install -y git
sudo yum install -y docker
sudo service docker start

/*
The following command adds ec2-user to the docker group,
so you can run docker commands without sudo
*/

sudo usermod -a -G docker ec2-user

/*
Now exit! This will end your connection to your EC2 instance.
(You'll need to log back in for the group change to take effect.)
*/

exit

Dockerizing the App

Again, the above sequence is not necessary at all, so I won’t focus too much on it moving forward. But if you were ever interested or are planning on developing this way, now you know 🙂

If you’re developing on your local Mac, make sure that you have the Docker Toolbox installed and that you know how to start up your Docker Machine. If not, here’s a useful resource to figure out how.

If you’re working on a remote EC2 instance, I assume you know your way around emacs. If not, here’s a good place to start.

Reasoning for Docker

docker-containers-vms

Docker takes advantage of Linux containers, virtualizing the operating system and thus providing a much more lightweight way to simulate multiple app instances running on one machine. For more on this, here’s a great resource for beginners to Docker.

What I’ve found particularly awesome about Docker is that, in development, you can run multiple containers that communicate with each other (via Docker networking).

So if you wanted to load-balance your app, putting two app instances behind an Nginx reverse proxy, you could set up three different containers (one running Nginx and two running instances of your app).

You can now test a nice simulation of the very infrastructure you’d be using in production while in development. And then when preparing this load-balanced setup for production, you don’t have to change much. (At least, as far as I’ve encountered.)

But I’m getting ahead of myself. Let’s get started on building a Docker image, running a Docker image, and then pushing that Docker image up to Docker Hub.

Building a Docker Image

A Docker image is basically a blueprint for how you want your container to be built. To build an image, we first have to “sketch” the image using a Dockerfile. This is a good example of “Infrastructure as Code” — you have a DevOps-related setup file that gets parsed and executed, just like any other script.

Our first task is to build a Docker image for our NodeJS app. When we run this Docker image, we want it to be akin to typing “npm start” or “node server.js” as we usually would (except our app will be running in a container).

(1) Review app’s directory structure

Just so we’re on the same page, my files are organized as such:

todo-app
-- node
   -- public (folder where all my assets/scripts go)
   -- server.js
   -- package.json
-- nginx
   -- nginx.conf (config file for nginx)
-- .dockerignore (similar to gitignore)
-- .gitignore

(2) Create a Dockerfile in the node directory

/*
Make sure you're in the node directory,
then create a Dockerfile for node.
You can just call it 'Dockerfile', but
I like giving it a more descriptive name
so that I don't get confused later
*/

cd node
touch node.dockerfile

(3) Open this Dockerfile in your favorite text editor and get to coding.

[Screenshot: node.dockerfile]
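In plain text, it looks roughly like this:

FROM node:argon

# create the app directory and make it the working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# install dependencies first (they get cached as their own layer)
COPY package.json /usr/src/app/
RUN npm install

# copy the rest of the app source into the container
COPY . /usr/src/app

# the port our server listens on
EXPOSE 8080

# fire up the server once the container starts
CMD ["npm", "start"]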

Let me walk through this.

FROM signifies which existing image to set as the base for everything else that follows. In this case, I’m saying “node:argon”, which is the Long-Term Support release of NodeJS, but you can also do “node:latest” or any specific release you’d like.

RUN executes a command while the image is being built. Here it creates a directory called /usr/src/app, which WORKDIR then sets as the working directory for the container.

COPY copies my package.json into my to-be-built container’s working directory, after which we RUN npm install, which installs all our dependencies in the container. Neato.

COPY then copies all of our app data in node/ to the working dir (/usr/src/app) of the container.

EXPOSE (you guessed it) exposes the container’s own port 8080 for inbound requests.

Finally, CMD tells the container what to do once the image has finished creating the container. In this case, we just fire up our server with npm start.

To recap, the Dockerfile is an instruction set. When we build from the Dockerfile, we get a full-fledged blueprint called a Docker image. Then when we run that image, Docker creates a container and runs our app in it according to the blueprint that is the Docker image.

[Diagram: a Dockerfile builds into an image; running the image creates a container]

(4) Build a Docker image from the Node app’s Dockerfile!

/*
Make sure you're still in the node app's directory.
The -t flag tags your Dockerfile with a particular name.
I suggest naming it your-dockerhub-name/todo-app.

That dot at the very end is important! It is the "context" 
for your Docker image. Which means, in your Dockerfile, 
where are all the paths you're relying on relative to? 
In this case, just our current node/ directory.

(the trailing backslash lets one command span two lines)
*/

docker build -f node.dockerfile \
-t <user-namespace>/<image-name> .

/*
You should now see this image listed by your Docker client
*/

docker images

(5) Run the Docker image

/*
Now to run the image we've created, we do three things:

1) Flag it to run in the background (as a daemon, '-d')
2) Use the '-p' flag to map the host's port 80 (the HTTP port
   in the EC2 instance's Security Group) to the container's
   port 8080 (the port we exposed in the Dockerfile for our app).
3) Specify which image to run.

*/

docker run -d -p 80:8080 <user-namespace>/<image-name>

/*
We should see this running!
*/

docker ps

/*
And this isn't just a theoretical "running". Navigate to
YOUR_IP_ADDRESS:80 in Google Chrome, and your app will be live.
If you're on Mac/Windows and aren't sure what your Docker IP is,
type docker-machine ip to check. If you're developing on an EC2
instance, this is the IP address you used to SSH in.
*/

/*
If you want to see any more interesting data on your container,
you can run any of the commands followed by the first few letters
of the running container's ID (which can be found with 'docker ps')
*/

docker logs <ID> // displays server logs
docker inspect <ID> // displays info like location of container's volume

(6) Tear down the Docker container

/*
You've had enough and want to close up the container? Use the first
few letters of the container's ID with the commands below.
*/

docker stop <ID>

docker ps // should show nothing
docker ps -a // should show container we just stopped

docker rm <ID> // removes container

docker ps -a // now should show nothing

(7) Push the image up to DockerHub

The added utility of Docker is that they have their own GitHub-like service, DockerHub. Your custom Docker images become accessible from anywhere once you've built them and pushed them up. Just as GitHub provides a single source of truth for version-controlled code repositories, so does DockerHub for version-controlled image repositories.

This is huge for a couple reasons.

First, you can push new layers of your Docker image up, and anyone on your team can then pull the latest images down.

Second, you can pull down the latest images of anything, ranging from Ubuntu and Node to Nginx and any other convenient image that anyone has ever publicly published.

I assume you already have a DockerHub account. If not, go ahead and create one. Then follow along with the code below.

/*
Login and push your image up. Done.

You can now go up to hub.docker.com and see your image!

If you run into any issues, it's probably because you didn't
namespace your image to match your DockerHub account name. In
which case rebuild your Docker image with the appropriate
namespace and try this again.
*/

docker login // type in username, password, and email
docker push <user-namespace>/<image-name>

/*
What if you don't want to keep the images on your machine? They can
take up some space (500+ MB).
*/

docker images // take note of first few letters of image's ID
docker rmi <ID> // deletes image
docker images // image should now be gone

/*
What if you're working on a new machine and don't have the image
locally? Now that it's on DockerHub, you can just pull it down.
*/

docker pull <user-namespace>/<image-name>

At this point, if you try to visit the app at IP_ADDRESS:PORT, it will no longer connect. We have stopped and removed our container.

Moving on to greener pastures, we are now going to start the process of deploying the app to a load-balanced cluster of machines.

Running Load-Balanced App Instances Locally

Reasoning

Short version: just to show you how we can do this using Docker and AWS. A todo list doesn't need this at all.

Longer version: because a large app with many users trying to simultaneously access the same resources will be brought to its knees if there isn't a way to manage the load.

So these are the steps we'll take to get this app load-balanced properly. I'm using Nginx because it is a lightweight and super powerful reverse proxy server. You could also use Amazon's managed load balancer (Elastic Load Balancer, aka ELB), which pairs nicely with their ECS service.

  1. Set up Nginx configuration
  2. Write and build an image for a container to run the Nginx server
  3. Create two different containers that run separate instances of the to-do list app
  4. Create a container for the Nginx service and link it to the two app containers that are already running. This will put the load-balancing action in motion.

(1) Set up Nginx config file

[Screenshot: nginx.conf]
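Here's roughly what that config boils down to (a sketch; the exact directives in my file differ a bit):

worker_processes 1;

events { worker_connections 1024; }

http {
    upstream node-app {
        # delegate each request to the least-busy upstream node
        least_conn;
        server node1:8080;
        server node2:8080;
    }

    # compress responses before sending them back
    gzip on;

    server {
        listen 80;

        location / {
            proxy_pass http://node-app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}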

I won't go through all of this, but I'll cover the main points. I'm specifying an HTTP service with upstream nodes node1 and node2, each serving on port 8080. I'm also indicating that this Nginx server will listen on port 80.

In other words, this Nginx server will listen on port 80 for requests, then forward them to either 'node1' or 'node2', hostnames that will come into play later.

The rest of the information here is important too, but it relates to things like how to delegate requests across the upstream nodes, whether to gzip responses, and what to do with request/response headers.

(2) Write and build an image for a container to run the Nginx server

The next step is to build a Docker image that will set us up to run this Nginx server the way we need it to. So obviously we need a Dockerfile. Below, I've created one in the nginx/ directory called nginx.dockerfile:

[Screenshot: nginx.dockerfile]
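Roughly, it looks like this (the error-log line is representative; yours may differ):

FROM nginx

# make room for error log files in the container
RUN mkdir -p /var/log/nginx

# drop our config where nginx expects to find it
COPY ./nginx.conf /etc/nginx/nginx.conf

# accept inbound requests on port 80
EXPOSE 80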

This should be pretty straightforward.

FROM sets the base image as nginx, which is used from the local cache or pulled down from DockerHub.

RUN creates space for error log files in the container.

COPY copies the config file over to a location the container expects it to be in.

EXPOSE exposes container port 80, making it available for inbound requests.

Next step is to build an image for our Nginx server:

cd nginx // if you're not already in that folder

/*
The trailing backslash lets the command span two lines.
Don't forget that trailing dot, which sets the context
for the image to build from.
*/

docker build -f nginx.dockerfile \
-t <user-namespace>/my-nginx .

docker images // <user-namespace>/my-nginx should be there now

docker push <user-namespace>/my-nginx // let's get this up to DockerHub

(3) Run the load-balancer against two app instances on your development machine

/*
First, we run two separate instances of the to-do NodeJS app.
By naming these containers, we can refer to them later. I didn't
specify a host port, just the container port to publish (8080);
nginx will reach these containers over Docker's bridge network.

Two things to note:
- the container names are the same as the upstream names in our
  nginx config file
- the port for these 'upstream nodes' (8080) is also the same
*/

docker run -d --name node1 -p 8080 <name-of-app-image>
docker run -d --name node2 -p 8080 <name-of-app-image>

docker ps // these two nodes should be running

/*
Notice what the below command does:
- Map the host's port 80 to the container's port 80
- Using Docker bridge networking, '--link node1:node1' makes the
  hostname 'node1' inside the nginx container resolve to our
  running 'node1' container (same for 'node2'). This is how the
  upstream entries in nginx.conf find the app instances.
*/

docker run -d --name nginx -p 80:80 --link node1:node1 \
--link node2:node2 <name-of-nginx-image>

docker ps // should see 3 running containers: node1, node2, nginx

If all went as planned, you can go to the IP address of where you're developing (either your dev EC2's IP address or your Docker Machine's IP) and see the app running on port 80!

All HTTP requests to the local machine's port 80 are handled by the Nginx reverse proxy, which then passes each request off to either node1 or node2. To see how this works, a fun thing you can do is make node2 serve a completely different app than node1; sometimes Nginx will forward you to node1 and other times to node2.

[Diagram: the Nginx reverse proxy forwarding requests to node1 and node2]

This is all happening on the same machine, but it appears as if the Nginx reverse proxy is forwarding requests to two different machines (even though they are two containers running on the same machine).

Note: It was pretty annoying to have to manually run each of those containers. Docker Toolbox ships with something called Docker Compose, which is kind of like a meta-Dockerfile: it abstracts all these individual commands into one file. I chose not to use it here, but it's definitely far more productive to use Docker Compose, which lets us do all of the above with a simple 'docker-compose up' and 'docker-compose down', as sketched below.
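For the curious, a docker-compose.yml for this setup might look something like this (a sketch in the original v1 syntax, assuming the directory layout from earlier):

node1:
  build: ./node
  dockerfile: node.dockerfile
  expose:
    - "8080"

node2:
  build: ./node
  dockerfile: node.dockerfile
  expose:
    - "8080"

nginx:
  build: ./nginx
  dockerfile: nginx.dockerfile
  ports:
    - "80:80"
  links:
    - node1
    - node2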

The last step is to stop and remove these running containers.

docker ps // take note of first few letters of all IDs
docker stop <nginx-id> <node1-id> <node2-id> // stops all three

docker ps // should see nothing
docker ps -a // should see node1, node2, and nginx recently stopped

docker rm <nginx-id> <node1-id> <node2-id> // removes all three
docker ps -a // should see nothing

Now when we actually want to deploy this across a cluster of different machines, this same abstraction doesn't have to be changed much.

Deploying the Load-Balanced App to AWS Using Docker Cloud

Because we've pushed our Docker images up to DockerHub, we're going to use a great tool called Docker Cloud. Docker Cloud allows you to provision EC2 instances (or their Azure equivalents) and deploy your Docker images to containers running on them.

This part is fairly straightforward thanks to Docker Cloud's awesome UI.

(1) Go to cloud.docker.com (log in if you haven't yet)

[Screenshot: Docker Cloud dashboard]

(2) Go to 'Cloud Settings' and input your AWS credentials under Cloud Providers.

[Screenshot: Cloud Providers settings with AWS credentials]

(3) Launch a new 'Node Cluster'. Name it, pick your provider and instance type, and choose to launch 3 nodes (1 for Nginx, 2 for the app instances).

[Screenshot: launching a new node cluster]

(4) Create & deploy new 'Stack' of services. This is Docker Cloud's version of Docker Compose! It defines which services we want to run on our containers and deploys them across available nodes in our node clusters.

[Screenshot: creating a new stack]

Here's the Stackfile we need to add here.

[Screenshot: the Stackfile]
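The Stackfile is YAML, and for this setup it looks something like the sketch below (swap in your own image names; 'latest' tags assumed):

node1:
  image: '<user-namespace>/todo-app:latest'
  expose:
    - '8080'

node2:
  image: '<user-namespace>/todo-app:latest'
  expose:
    - '8080'

nginx:
  image: '<user-namespace>/my-nginx:latest'
  ports:
    - '80:80'
  links:
    - node1
    - node2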

I am defining aliases for three services -- nginx, node1, and node2. For each, I specify which Docker image to associate with it, as well as ports to publish. I also link the nginx service to the two app services.

This looks a lot like the commands we typed in manually a few steps ago:

docker run -d --name node1 -p 8080 <name-of-app-image>
docker run -d --name node2 -p 8080 <name-of-app-image>

docker run -d --name nginx -p 80:80 --link node1:node1 \
--link node2:node2 <name-of-nginx-image>

The Stackfile handles the above code for us in a more automated fashion.

Go ahead and create the Stack & deploy it. After a couple minutes, you'll see that our stack of services has been deployed across the available nodes in the cluster we defined earlier.

[Screenshot: the stack's services deployed across the node cluster]

If you scroll down, you'll see an endpoint for your nginx service. Go visit it. This is what I see now:

[Screenshot: Tudoo live at the nginx service endpoint]

Woohoo! We have successfully deployed our todo list that is being load balanced by an Nginx service.


The advantage of this is that, as our needs grow, our infrastructure can easily grow with it. Using Docker Cloud, we can easily scale the number of deployed nodes up and down, or we can even set up a cluster of nodes in a different region to allow for better availability.

The next step here would be to use a custom domain name, but I think this is a good stopping point.

Conclusion

This is by no means comprehensive, and I've definitely flubbed some explanations above, but I hope this helps you get over the "deployment jitters" and return your focus to building great stuff.
