Hello everyone, my name is Ilori S. I am a full-stack (backend-heavy) software developer with 3 years of experience building web applications and microservices. In today's lesson, I will show you how to dockerize a Node.js app using Nginx as a reverse proxy.
Node.js is a JavaScript runtime built on Chrome's V8 engine, and a very powerful technology to have under one's belt. Nginx, on the other hand, is a powerful web server that can be used as a reverse proxy, load balancer, mail proxy, and HTTP cache.
Docker is quite a handful, and I only started playing with it last year. The thing about Docker is this: it makes it easier for developers to create, deploy, and run applications using containers.
We will dive into the code in a bit, but first let's discuss why you'd want to dockerize your app and use Nginx in the first place.
Why Dockerize Your App With Nginx As A Reverse Proxy?
Node.js on its own is powerful and fast. Used correctly, a single Node.js process can comfortably handle thousands of concurrent HTTP requests. Amazing, isn't it? But that raises a question: if Node.js sounds fine on its own, why put Nginx in front of it, and why dockerize them at all?
There are situations where there are simply too many requests for a single Node.js instance to handle. That's when you start thinking about scaling out, and that's where Docker comes into play: you can start multiple instances of your Node.js app as containers.
But then the question becomes: how do you manage multiple instances of your application? You need some sort of load balancer, and in my case Nginx is the best bet, because it lets you manage multiple instances behind a single entry point.
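To make the round-robin idea concrete, here is a tiny JavaScript sketch of how a load balancer cycles requests across instances. The instance names here are hypothetical, not the ones from this project:

```javascript
// Minimal sketch of round-robin load balancing, the default strategy
// Nginx uses for an upstream group. Instance names are hypothetical.
const instances = ['app_1:4400', 'app_2:4400', 'app_3:4400'];
let next = 0;

function pickInstance() {
  const target = instances[next];
  next = (next + 1) % instances.length; // cycle back to the first instance
  return target;
}

// Six incoming requests get spread evenly: each instance serves two.
const served = [];
for (let i = 0; i < 6; i++) served.push(pickInstance());
console.log(served);
```

Each request simply goes to the next instance in the list, which is exactly what Nginx will do for us later in this lesson.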
The diagram below best illustrates this explanation. No worries, we will get practical soon.
Glossary
There are a few technical terms I will be using in this lesson, so let's get clear on them before we start.
- Image: An image in this context is a file that contains the application code, libraries, tools, dependencies, and other files needed to make an application run. And no, this isn't your regular PNG or JPG file.
- Container: A container is a running instance of an image.
- Proxy server: According to Wikipedia, a proxy server is a server application or appliance that acts as an intermediary for requests from clients seeking resources from servers that provide those resources.
With those terms clear, I think it’s a good time to start talking about the requirements for this lesson.
Project Requirements
To get the best out of this lesson, the following requirements need to be satisfied. This lesson is targeted at beginners and mid-level developers working with Docker and Node.js.
- Basic Node.js knowledge: To get the best out of this lesson, it is advisable that you are at least familiar with Node.js and understand how to spin up a web server with Express. This project depends heavily on JavaScript.
- Basic Docker knowledge: It is also important that you are comfortable using some Docker CLI commands.
- Docker & Docker Compose: This should have been first on the list, but heck it. Make sure you have Docker and Docker Compose installed on your machine; the project depends heavily on both. You can install Docker from this link.
- Internet connection: You need an active internet connection, since we will pull some images and source files from both Docker Hub and GitHub.
- A text editor && some optimism: A good old text editor and some optimism are all we need to start.
Drum roll, please. It’s time to get our hands dirty!
1. Downloading Our Source Code
The source code for this project is hosted on GitHub, as I mentioned earlier, so you can clone it from this GitHub repository. You can also fork the repo if you are interested in contributing to this project.
If you've successfully cloned the repo, you should have a folder structure that looks exactly like the one in the image below. And by image, I mean a regular PNG or JPEG.
Give yourself a pat on the back. You've successfully completed Git Essentials 101. Next up: installing our project dependencies from our dear NPM.
P.S.: Some of the files in the folders shown above have already been configured, so you don't have much to worry about. I set things up this way because I don't want us to get sidetracked managing configs and such. Let me know if I talk too much.
2. Installing Our Project Dependencies
Our project needs a few dependencies to do what it is supposed to do, so we'll install them using npm. If you are using VS Code, open the integrated terminal and run the command below.
$ npm install
If you are not using VS Code, open a new terminal window (cmd, PowerShell, or Terminal, depending on your environment), navigate to the project directory, and run the same command.
With the installation out of the way, it's time to do a little configuration and get this running.
3. Project Configuration
In this section, we will configure our project. There is a config folder with a few files, all of which load values from the .env file in the project root directory. You may be wondering which .env file matters here: .env and .env.testing are the only ones we need to worry about.
#Application Configurations
APP_KEY=
APP_NAME=Martian_Mongodb
APP_VERSION=1.0.0
APP_URL=http://martian_mongo_flavour
APP_PORT=4400
APP_ENV=development
NODE_ENV=development
#Database Configurations
DATABASE_HOST=martian_mongo_db
DATABASE_PORT=27017
DATABASE_NAME=martian_database
#Redis Configurations
REDIS_PORT=6379
REDIS_HOST=martian_redis_db
#Mail Configurations
MAIL_FROM=info@martian.com
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=
MAIL_PASSWORD=
#Token Configurations
JWT_USER_HEADERS=
JWT_ADMIN_HEADERS=
JWT_USER_SECRET=
JWT_ADMIN_SECRET=
#Twilio Config
TWILIO_ACCOUNT_SID=
TWILIO_AUTH_TOKEN=
#Aws Configurations
AWS_SECRET_KEY=
AWS_ACCESS_KEY=
AWS_REGION=
Navigate to the Token Configurations section in the .env.testing file and give the JWT_USER_SECRET key a value. There is also an APP_KEY declared at the very top of your .env.testing file; populate that with a value as well.
We will be sending some test emails with this application, so edit the Mail Configurations section with your SMTP settings. You can use Mailtrap for this; I recommend it because it gives you the feel of handling emails in a live environment.
At the time of writing, the scratch server this lesson was built on is still under active development, so there is a good chance a newer version exists by the time you read this. But it will always be backward compatible with previous versions. Worry not, lad.
4. Database Models
Data: what can we do without it? This lesson uses MongoDB as its primary database. You do not need to install MongoDB on your machine; Docker will take care of that automatically. You also don't need to worry about the database configuration, as that has been taken care of as well.
What you need to know is that inside the models folder there are two files, Newsletters.js and Users.js. We will register some demo users to our newsletter program, and we will also simulate registering and authenticating demo users.
The Newsletter model follows the same pattern, and your User model should look exactly like the code snippet below.
/* Base Model & Mongoose ORM */
const Mongoose = require('mongoose');

/**
 * This class contains the schema required to model a user collection. It also exposes reusable methods imported from the mongoose library.
 *
 * @author Ilori Stephen A
 * @returns {Object}
 * @name Signup
 * @alias Register
 * @param {Null}
 *
 */
class User {
  user() {
    const Schema = Mongoose.Schema;
    const UserBluePrint = new Schema({
      firstName: {
        type: String,
        required: [true, "The first name field is required"]
      },
      lastName: {
        type: String,
        required: [true, "The last name field is required"]
      },
      email: {
        type: String,
        unique: true,
        required: [true, "The email field is required"]
      },
      status: {
        type: Number,
        default: 0,
        /* 0: Pending, 1: Approved, 2: Suspended. */
      },
      password: {
        type: String,
        required: [true, "The password field is required"]
      },
      createdAt: {
        type: String,
        /* Use a function so the timestamp is evaluated per document, not once at schema definition. */
        default: () => global.Date()
      },
      updatedAt: {
        type: String,
        default: () => global.Date()
      }
    });
    /* Reuse the compiled model if it already exists (Mongoose.models, not Mongoose.model). */
    const User = Mongoose.models.Users || Mongoose.model('Users', UserBluePrint);
    return User;
  }
}

module.exports = new User().user();
That was easy. Let’s take a look at some of our validators.
5. Validators
Every request that comes into our application needs to be validated at some point; otherwise our application will be vulnerable to all sorts of attacks, and it's a cold world out there.
Navigate to the validators folder and you will find two files: an Auth.js file and a Validator.js file. Validator.js is our base validator, and all other validators extend it.
I think it's better to have your validation logic defined in one place rather than scattered across your controllers or helpers. However, I am still open to whatever you think works better.
Our base validator should look like the code snippet below. It has two methods, validateEmail and fetchAppConfigs. The validateEmail method validates an email address and checks whether it already belongs to a user. The fetchAppConfigs method is responsible for loading the environment variables defined in our .env.testing file.
/**
 * This is the base validator class. All other validator classes extend this class, sharing reusable methods.
 *
 * @author Ilori Stephen A <stephenilori458@gmail.com>
 * @returns {Object}
 * @name Validator
 * @param {Null}
 *
 */
const AppConfigs = require('../config/App');
const _Validator = require('validator');

class Validator {
  /* Calling The Galaxy For Help! */
  async validateEmail(Payload, Model) {
    const Response = { status: false, errors: {} };
    try {
      if (!_Validator.isEmail(Payload.email)) {
        Response.status = true;
        Response.errors.email = 'Please, enter a valid email address';
      }

      const checkEmail = await Model.findOne({ email: Payload.email }).lean();
      if (checkEmail) {
        Response.status = true;
        Response.errors.email = 'Sorry, this email address is not available.';
      }
      return Response;
    } catch (e) {
      Response.status = true;
      Response.errors.server = 'Sorry, an unexpected error occurred and your request could not be processed.';
      return Response;
    }
  }

  fetchAppConfigs() {
    return AppConfigs();
  }
}

module.exports = Validator;
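The snippet above leans on the validator package and a Mongoose model. To see the flow in isolation, here is a stripped-down, stand-alone sketch of the same validateEmail logic: a simple regex stands in for validator.isEmail, and FakeModel stands in for the Mongoose model. Both stubs are assumptions for illustration only.

```javascript
// Stand-alone sketch of the base validator's validateEmail flow.
// The regex is a rough stand-in for validator.isEmail, and FakeModel
// stands in for the Mongoose model; both are illustration-only stubs.
const looksLikeEmail = (value) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);

const FakeModel = {
  // Pretend this email is already registered.
  existing: ['taken@example.com'],
  async findOne({ email }) {
    return this.existing.includes(email) ? { email } : null;
  }
};

async function validateEmail(Payload, Model) {
  const Response = { status: false, errors: {} };
  if (!looksLikeEmail(Payload.email)) {
    Response.status = true;
    Response.errors.email = 'Please, enter a valid email address';
  }
  const checkEmail = await Model.findOne({ email: Payload.email });
  if (checkEmail) {
    Response.status = true;
    Response.errors.email = 'Sorry, this email address is not available.';
  }
  return Response;
}

validateEmail({ email: 'new@example.com' }, FakeModel)
  .then((r) => console.log(r)); // a clean payload produces no errors
```

Note how the method never throws for a bad email; it accumulates errors into a Response object that the controllers inspect later.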
I have also added a snippet of what our Auth validator would look like below.
const CryptoJs = require('crypto-js');
const BaseValidator = require('./Validator');

class Auth extends BaseValidator {
  constructor() { super(); }

  async login(Payload, Model) {
    const Response = { status: false, errors: {} };
    try {
      const checkAccount = await Model.findOne({ email: Payload.email, status: true, deletedAt: null }).lean();
      if (!checkAccount) {
        Response.status = true;
        Response.errors.account = 'Invalid Auth Credentials. Please, try again.';
      }

      /* Check If The Account Exists & The Password Matches */
      if (checkAccount) {
        const hashedPassword = CryptoJs.AES.decrypt(checkAccount.password, super.fetchAppConfigs().appKey).toString(CryptoJs.enc.Utf8);
        if (hashedPassword !== Payload.password) {
          Response.status = true;
          Response.errors.account = 'Invalid Auth Credentials. Please, try again.';
        }
      }
      return Response;
    } catch (e) {
      console.log(e);
      Response.status = true;
      Response.errors.server = 'Sorry, an unexpected error occurred and your request could not be processed.';
      return Response;
    }
  }

  async register(Payload, Model) {
    const Response = { status: false, errors: {} };
    try {
      if (Payload.firstName == '') {
        Response.status = true;
        Response.errors.firstName = 'Sorry, the first name field is required.';
      }
      if (Payload.lastName == '') {
        Response.status = true;
        Response.errors.lastName = 'Sorry, the last name field is required.';
      }

      /* Registration should fail when an account with this email already exists. */
      const checkAccount = await Model.findOne({ email: Payload.email }).lean();
      if (checkAccount) {
        Response.status = true;
        Response.errors.account = 'Sorry, an account with this email address already exists.';
      }

      if (Payload.password.length < 7) {
        Response.status = true;
        Response.errors.password = 'Sorry, please use a stronger password.';
      }

      /* Return the Response object so callers can check Response.status. */
      return Response;
    } catch (e) {
      Response.status = true;
      Response.errors.server = 'Sorry, an unexpected error occurred and your request could not be processed.';
      return Response;
    }
  }
}

module.exports = new Auth();
Our Auth.js file has two methods, login and register, which validate the payloads for the login and signup actions in their respective controllers.
That's our validators wrapped up. Time to dive into those controllers.
6. Controllers
I hope you aren't thinking about Drake's song Controlla at this point? It's a good song, but this is even better. There is a controllers folder with 3 subfolders in our project. We will start with the app folder and then move to the auth folder.
Inside the app folder there is a Newsletter.js file, and inside the auth folder we have a Login.js and a Signup.js file.
Newsletter.js loads in other files: the base Controller, the Newsletter model, the mail handler, and the base Validator class. The Newsletter class has a single method that accepts the two parameters (req, res) handed to it by Express.
The Login controller in the auth folder also has one method, login, which is responsible for authenticating users and assigning them a JWT auth token. The GitHub gist below shows what the Login controller looks like.
const Jwt = require('jsonwebtoken');
const Controller = require('../Controller');
const UserModel = require('../../models/Users');

/* Validators */
const Validators = require('../../validators/Auth');

class Login extends Controller {
  constructor() { super(); }

  async login(req, res) {
    try {
      const Body = req.body;
      const Validator = await Validators.login(Body, UserModel);
      if (Validator.status) {
        return super.response(res,
          400,
          'There are some errors in your request. Please, try again.', {},
          Validator.errors);
      }

      /* Login The User */
      const User = await UserModel.findOne({ email: Body.email }).lean();
      const Token = Jwt.sign({
        data: {
          access: 'user-level',
          phone: User.phone,
          email: User.email
        }
      }, super.fetchAppConfigs().jwtSecretUser, { expiresIn: '3days' });

      return super.response(res, 200, 'Login Successful', { user: User, token: Token });
    } catch (e) {
      return super.response(res, 500, 'An unexpected error occurred. Please, try again.', {}, { server: 'Operation Failed' });
    }
  }
}

module.exports = new Login();
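The base Controller's response helper is not shown in this lesson, so here is a sketch of what a helper matching the calls above might look like. The signature (res, status, message, data, errors) is inferred from the gists; the real Controller.js in the repo may differ:

```javascript
// Hypothetical sketch of the base Controller's response() helper used by
// the Login and Signup controllers. The signature is inferred from the
// calls in the gists; the real Controller.js may differ.
class Controller {
  response(res, status = 200, message = '', data = {}, errors = {}) {
    return res.status(status).json({ status, message, data, errors });
  }
}

// Quick demonstration with a stubbed Express-style res object:
const calls = {};
const fakeRes = {
  status(code) { calls.code = code; return this; },   // Express res.status() is chainable
  json(body) { calls.body = body; return body; }
};
new Controller().response(fakeRes, 200, 'Login Successful', { user: 'jane@example.com' });
console.log(calls.code); // 200
```

Centralizing the response shape like this keeps every endpoint returning the same envelope of status, message, data, and errors.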
The Signup controller follows the same pattern. I have attached a GitHub gist showing what this controller looks like as well.
const Jwt = require('jsonwebtoken');
const CryptoJs = require('crypto-js');
const Controller = require('../Controller');
const UserModel = require('../../models/Users');

/* Validators */
const Validators = require('../../validators/Auth');

class Signup extends Controller {
  constructor() { super(); }

  async signup(req, res) {
    try {
      const Body = req.body;
      const Validator = await Validators.register(Body, UserModel);
      if (Validator.status) {
        return super.response(res,
          400,
          'There are some errors in your request. Please, try again.', {},
          Validator.errors);
      }

      /* Create The User */
      let newUser = new UserModel({
        firstName: Body.firstName,
        lastName: Body.lastName,
        email: Body.email,
        status: 1,
        password: CryptoJs.AES.encrypt(Body.password, super.fetchAppConfigs().appKey).toString(),
      });
      const User = await newUser.save();
      const Token = Jwt.sign({
        data: {
          access: 'user-level',
          phone: User.phone,
          email: User.email
        }
      }, super.fetchAppConfigs().jwtSecretUser, { expiresIn: '3days' });

      return super.response(res, 200, 'Registration Successful', { user: User, token: Token });
    } catch (e) {
      return super.response(res, 500, 'An unexpected error occurred. Please, try again.', {}, { server: 'Operation Failed' });
    }
  }
}

module.exports = new Signup();
I’m glad that is out of the way. It’s time to show you what’s in the routes folder.
7. Registering Routes
This isn't Route 66 from Pixar's Cars animation; it's Express routes, actually. Our route callbacks are the methods defined in our controllers, and the Routes function is then exported to our application entry point. The GitHub gist below shows the content of our Routes.js file.
const Middleware = require('../middleware/JwtMiddleware');
const Newsletter = require('../controllers/app/Newsletter');
const Welcome = require('../controllers/Welcome');
const Signup = require('../controllers/auth/Signup');
const Login = require('../controllers/auth/Login');

/**
 * All the API endpoints or routes to mars are loaded here. You can load in routes from anywhere but it's best that you load them in from the controllers.
 *
 * @author Ilori Stephen <stephenilori458@gmail.com>
 * @param {Null}
 * @returns {Function} Express
 * @name Routes
 * @alias ApplicationRoutes
 *
 */
module.exports = (App) => {
  /* Are we still on earth? */
  App.get('/api/v1/welcome', Welcome.whatYearIsIt);
  App.post('/api/v1/newsletter', Newsletter.registerEmail);

  /* Auth Routes */
  App.post('/api/v1/login', Login.login);
  App.post('/api/v1/register', Signup.signup);

  /* In need of some inspiration? */
  App.all('/api/v1/inspire', (req, res) => {
    res.status(200).send('You can build anything you set your mind to!');
    res.end();
    return;
  });
}
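To see the registration pattern in isolation, here is a small sketch with a stubbed Express-style app object. The handlers are placeholders standing in for the real controllers:

```javascript
// Sketch: how the exported Routes function wires handlers onto an app.
// `App` is a stub standing in for an Express app; the handlers are
// placeholders, not the real controllers.
const registered = [];
const App = {
  get(path, handler) { registered.push(['GET', path]); },
  post(path, handler) { registered.push(['POST', path]); },
  all(path, handler) { registered.push(['ALL', path]); }
};

// Same shape as the exported function in Routes.js.
const Routes = (app) => {
  app.get('/api/v1/welcome', (req, res) => {});
  app.post('/api/v1/newsletter', (req, res) => {});
  app.post('/api/v1/login', (req, res) => {});
  app.post('/api/v1/register', (req, res) => {});
  app.all('/api/v1/inspire', (req, res) => {});
};

Routes(App);
console.log(registered.length); // 5 routes registered on the app
```

In the real project, the application entry point simply calls this exported function with the Express app instance.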
That was fast, wasn’t it? In the next chapter of this lesson, we will be talking about setting up Docker and configuring Nginx.
8. Docker Configs
In the root directory of this project there is a file called Dockerfile, with no extension, and its content looks like the GitHub gist below.
FROM node:15.12.0-alpine3.10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 4400
CMD ["node", "index.js"]
If you ask me, a Dockerfile is a blueprint that describes how a Docker image is built and what the resulting container will be able to do.
Our Dockerfile above shows that our new image will be built from the node:15.12.0-alpine3.10 base image, which keeps the resulting image small, since Alpine images are popular for their small size. It also declares a working directory inside the resulting container at /usr/src/app.
Next, the package.json and package-lock.json files in our project root are copied into /usr/src/app inside the container.
Then our project dependencies are installed while the image is being built.
After that, our application source code is copied into /usr/src/app as well, and port 4400 is exposed from the resulting container, since Docker makes every container behave like a small computer with its own network interface.
And finally, the container runs the command node index.js, which starts up our application. That was a lot, and there is more.
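One small addition worth considering (a suggestion of mine, not part of the original repo): a .dockerignore file keeps the host's node_modules and local env files out of the COPY . . step, so the dependencies installed inside the image aren't overwritten by whatever happens to be on your machine:

```text
# .dockerignore (suggested, not in the original repo)
node_modules
npm-debug.log
.git
.env
```

Docker reads this file from the build context root, next to the Dockerfile.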
The Nginx Dockerfile
Instead of having these files lying around in the project root, it would make more sense to put them in a folder and use them from there, but I wanted to keep this lesson quick and easy. Our Nginx Dockerfile is very small, and it too is built from a small Nginx Alpine base image.
In it, we replace the default Nginx configuration in the base image with ours. Our nginx.Dockerfile is shown below.
FROM nginx:1.19.8-alpine
COPY ./nginx.conf /etc/nginx/nginx.conf
However, that's not everything that makes up our Nginx image. I will shed more light on our nginx.conf file in the next chapter.
The Nginx Config
The nginx.conf file is the most important file in this lesson. Without it, traffic won't be load balanced or distributed to our Node.js instances at all. The GitHub gist below shows what this file looks like.
events { worker_connections 1024; }

http {
    # upstream servers
    upstream martian_servers {
        server martian_martian_mongo_flavour_1:4400;
        server martian_martian_mongo_flavour_2:4400;
        server martian_martian_mongo_flavour_3:4400;
    }

    # configuration for the nginx server
    server {
        listen [::]:3050;
        listen 3050;

        # Proxying the connections
        location /api {
            proxy_pass http://martian_servers;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }

        location / {
            proxy_pass http://martian_servers;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
We have two important blocks in this file and they give instructions on how the Nginx engine will operate.
The events block is where connection-processing settings live. It contains worker_connections, which sets the maximum number of simultaneous connections each Nginx worker process can open.
The http block provides the context in which HTTP server directives are specified. Inside it we have an upstream block named martian_servers, which maps the server names of our Node.js instances to their ports. Incoming traffic will be distributed evenly across these instances.
The server block specifies which port our Nginx server listens on and how it handles incoming traffic based on the request path.
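Round-robin is only the default. If you ever need a different spread, Nginx supports a few standard upstream directives; the fragment below is a hypothetical variation on our upstream block, not something this project uses:

```nginx
# Hypothetical variations on the upstream block above.
upstream martian_servers {
    # least_conn;   # uncomment to route to the instance with the fewest active connections
    server martian_martian_mongo_flavour_1:4400 weight=2;  # weighted round-robin: ~2x the traffic
    server martian_martian_mongo_flavour_2:4400;
    server martian_martian_mongo_flavour_3:4400 backup;    # only used when the others are down
}
```

For this lesson we stick with plain round-robin, which needs no extra directives at all.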
The Docker Compose File
version: "3.9"
services:
  martian_mongo_flavour:
    build: .
    env_file:
      - .env.testing
    links:
      - martian_mongo_db
      - martian_redis_db
    depends_on:
      - martian_mongo_db
      - martian_redis_db
    volumes:
      - ".:/usr/src/app"
  martian_redis_db:
    image: redis:6.2.1-alpine
    container_name: martian_redis_db
    ports:
      - "6379:6379"
  martian_mongo_db:
    image: mongo:latest
    container_name: martian_mongo_db
    ports:
      - "27017:27017"
    command: --quiet
  martian_mongo_nginx:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    depends_on:
      - martian_mongo_flavour
    ports:
      - "3050:3050"
Our docker-compose.yml looks exactly like the gist above. It contains several blocks that instruct the Docker engine on how to build and run our application. Inside it you will find a Redis service and a MongoDB service; Docker Compose will fetch the images needed to build those services and cache them the first time we run the command
$ docker-compose up
You will also find that the .env.testing file we modified earlier is loaded into our martian_mongo_flavour service.
9. Rounding Up
Now that we have finished setting up our application, the next thing to do is start it and confirm that it works. To do that, we run the command
$ docker-compose up --scale martian_mongo_flavour=3
If you have issues running this command, you can try the command below.
$ sudo docker-compose up --scale martian_mongo_flavour=3
This command tells the Docker engine to create 3 instances of our martian_mongo_flavour service, which is our Node.js app. If done correctly, you should see output similar to the image below.
To verify that our Nginx instance works, open Postman and make a POST request to our Nginx endpoint, like so: http://127.0.0.1:3050/api/v1/newsletter. It looks as if Nginx is our Node.js server, but it's not. Nginx matches the /api path in this request and passes it to our martian_servers upstream using a round-robin approach, as this is the default.
If you make the request correctly in postman, you should have something similar to the image below.
As you keep making repeated requests, you will find that Nginx actually moves the traffic evenly between each of the servers specified in our upstream.
The image below proves this.
In conclusion, that's how we ended up using Nginx as a reverse proxy for our Node.js application. You will also find MongoDB running on your machine even if you never installed it.
Learning Tools
- I recommend Brad Traversy’s Youtube Crash Course On Docker.
- I recommend Bret Fisher’s Udemy Docker Mastery Course.
- Practice, Practice & Practice.
Learning Strategy
I used the learning tools above to achieve this. I watched a lot of Bret Fisher's videos about Docker, and afterwards I put what I learned into practice.
Reflective Analysis
Docker really is an amazing tool, and it makes your work as a developer much easier. With Docker, you can easily manage your application's dependencies, including pinning their versions.
Docker also makes it easy to run multiple instances of your application without worrying about downtime. And once in a while, if you are not too busy, it's worth going through the Docker documentation. The world moves fast, and we all need to stay on top of our game.
Conclusion
You don't have to dockerize everything. I think you should only dockerize serious, enterprise-scale projects; it makes no sense to bring an armored tank to a fistfight. There are also other technologies, like Kubernetes, that build on containers for orchestration at larger scale. The source code for this project is on GitHub, and you can always go ahead and make a PR anytime. Thank you for reading. Once again, my name is Ilori Stephen Adejuwon.