
DevOps Hands-on Guide: Deploying a Node.js application on AWS with Terraform and GitHub Actions

Deploy a Node.js application on AWS with Terraform and GitHub Actions. Our hands-on DevOps guide simplifies cloud deployment and improves efficiency.

Ganesh Mani
June 22, 2024

Introduction

This series is going to be a complete hands-on to learn DevOps with real-world scenarios. In this guide, you’ll learn how to deploy a simple Node.js application on AWS using the power of Terraform and the automation capabilities of GitHub Actions. By the end of this tutorial, you’ll have a robust, automated deployment pipeline that takes advantage of AWS’s scalable infrastructure.

Prerequisites

Before we dive into the guide, I assume you have basic knowledge of Node.js, AWS EC2, and Terraform. If you're new to these concepts, I recommend going through the basics below before starting with the scenario.

Terraform: https://developer.hashicorp.com/terraform/tutorials/aws-get-started/aws-build

Nodejs: https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs/Introduction

AWS EC2: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html

Setting up Node.js application

Let’s create a simple Node.js application to handle API requests.

mkdir nodejs-aws-ec2
cd nodejs-aws-ec2
npm init --yes

Install the dependencies needed to handle HTTP requests.

npm install express

After that, create index.js and add the following code:

const express = require('express');

const app = express();

app.get('/', (req, res) => {
  res.send('Hello World!!!');
});

const PORT = process.env.PORT || 4500;

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});

Since this is a simple Node.js application, we won’t connect to a database or use TypeScript. We will see how to deploy a TypeScript application in upcoming parts.
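Before provisioning anything, it’s worth a quick local smoke test (assuming you ran npm install express above):

```shell
# Start the server in the background
node index.js &

# Give it a moment to boot, then hit the root route
sleep 1
curl http://localhost:4500
# should print: Hello World!!!

# Stop the background server
kill %1
```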

Initialize Terraform

Now, we can provision resources to deploy our web server on AWS EC2. Create an infra directory and add the following versions.tf file.

versions.tf

terraform {
  required_version = ">= 1.2.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
  profile = "default"
}

Here, we start by creating an infra directory that houses our Terraform files. The versions.tf file specifies the minimum required Terraform version and declares the AWS provider with a compatible version constraint. Additionally, we configure the AWS provider to target the us-east-1 region with the default profile, setting the stage for provisioning resources in that region.

Understanding VPC in AWS

A Virtual Private Cloud (VPC) in AWS is a virtual network dedicated to your AWS account, providing complete control over your virtual networking environment. It allows you to launch AWS resources, such as EC2 instances, in a logically isolated space, ensuring secure and efficient communication. Within a VPC, you can define your own IP address ranges, create subnets, configure route tables, and manage network gateways.

Subnets

Subnets are subdivisions within a VPC that allow you to segment your network for organizing and securing your resources. Each subnet is associated with a specific availability zone, providing fault tolerance and redundancy. Subnets can be classified as public or private based on their accessibility from the internet. Public subnets have a route to an internet gateway, making them accessible from the internet, whereas private subnets do not, restricting direct internet access and enhancing security for sensitive resources.

Route Tables

Route tables are essential components within a VPC that contain rules, known as routes, to direct network traffic. Each subnet in a VPC must be associated with a route table, which determines how data packets are forwarded within the VPC and to external networks. Route tables can include routes to the internet, other subnets, virtual private gateways (for VPN connections), and more. By customizing route tables, you can control the flow of traffic to ensure efficient and secure communication within your VPC.

Security Groups

Security groups act as virtual firewalls for your AWS resources, controlling inbound and outbound traffic at the instance level. Each security group consists of rules that specify allowed traffic based on protocols, ports, and source/destination IP ranges. Unlike traditional firewalls, security groups are stateful, meaning they automatically allow return traffic for established connections. By carefully configuring security group rules, you can enhance the security posture of your VPC, ensuring that only authorized traffic reaches your instances.
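To make these ideas concrete, here is a minimal Terraform sketch of a custom VPC with one public subnet, an internet gateway, and a route table. This is illustrative only — the resource names and CIDR ranges are my own choices, and the guide below uses the default VPC instead:

```hcl
resource "aws_vpc" "demo" {
  cidr_block = "10.0.0.0/16"
}

# A public subnet in a single availability zone
resource "aws_subnet" "public" {
  vpc_id            = aws_vpc.demo.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

# The internet gateway is the VPC's door to the internet
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.demo.id
}

# Route table sending all non-local traffic through the gateway
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.demo.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

# Associating this route table is what makes the subnet "public"
resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}
```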

VPC with Simple Analogy

Let’s understand a VPC with a simple analogy: consider the VPC as an amusement park.

Subnets

Subnets are different areas or zones within the amusement park:

Public Subnet: This is like the main entrance area of the park where visitors first arrive. It’s open to everyone and has attractions or information booths accessible to the public.

Private Subnet: This is like the staff-only area where maintenance happens and supplies are stored. Only authorized personnel can access this area, keeping it secure from the general public.

Internet Gateway

The internet gateway is like the main entrance gate of the amusement park. It allows visitors (internet traffic) to enter the public areas of the park but doesn’t allow them direct access to the staff-only areas.

Route Tables

Route tables are like the park’s internal maps and signage that direct visitors to different attractions:

• They determine the paths visitors can take to get from the main entrance to various attractions.

• They ensure visitors know where they can and cannot go, maintaining order within the park.

Security Groups

Security groups are like security checkpoints at the entrance to each attraction:

• They control who can enter and what they can bring with them.

• They ensure that only visitors with proper tickets (permissions) can access specific rides or areas.

• They keep the attractions safe by only allowing authorized actions and visitors.

Terraform Config

Now that we have a firm understanding of what a VPC is and of the components around it, let’s write our main configuration file to provision an AWS instance within the VPC and configure Nginx.

main.tf

data "aws_ami" "ubuntu" {
    most_recent = true
    owners = ["099720109477"]

    filter {
        name = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
    }
}

data "aws_key_pair" "nodejs-ec2-key-pair" {
    key_name = "nodejs-ec2-instance-kp"
}

resource "aws_instance" "nodejs-ec2-instance" {
    ami = data.aws_ami.ubuntu.id
    instance_type = "t2.micro"
    key_name = data.aws_key_pair.nodejs-ec2-key-pair.key_name
    tags = {
        Name = "nodejs-ec2-instance"
    }

    vpc_security_group_ids = [aws_security_group.nodejs-ec2-instance-sg.id]
    user_data = file("init.tpl")

    lifecycle {
      prevent_destroy = true
    }
}

resource "aws_default_vpc" "default" {
}

resource "aws_security_group" "nodejs-ec2-instance-sg" {
    name = "nodejs-ec2-instance-sg"

    vpc_id = aws_default_vpc.default.id

    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    ingress {
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

Data Sources

data "aws_ami" "ubuntu" {
    most_recent = true
    owners = ["099720109477"]

    filter {
        name = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
    }
}

It retrieves the most recent Amazon Machine Image (AMI) for Ubuntu 20.04 from the AWS account with owner ID 099720109477 (Canonical, the official Ubuntu publisher).

data "aws_key_pair" "nodejs-ec2-key-pair" {
    key_name = "nodejs-ec2-instance-kp"
}

After that, we look up an existing AWS key pair named nodejs-ec2-instance-kp.
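Note that this data source only looks up a key pair — it does not create one. If nodejs-ec2-instance-kp doesn’t exist yet, one way to create it is with the AWS CLI (assuming it is installed and configured for the same account and region):

```shell
# Create the key pair and save the private key locally
aws ec2 create-key-pair \
  --key-name nodejs-ec2-instance-kp \
  --query 'KeyMaterial' \
  --output text > nodejs-ec2-instance-kp.pem

# Restrict permissions so ssh will accept the key
chmod 400 nodejs-ec2-instance-kp.pem
```

Keep the .pem file safe — you’ll need it for SSH access and for the PRIVATE_KEY secret in the CI/CD section.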

resource "aws_instance" "nodejs-ec2-instance" {
    ami = data.aws_ami.ubuntu.id
    instance_type = "t2.micro"
    key_name = data.aws_key_pair.nodejs-ec2-key-pair.key_name
    tags = {
        Name = "nodejs-ec2-instance"
    }

    vpc_security_group_ids = [aws_security_group.nodejs-ec2-instance-sg.id]
    user_data = file("init.tpl")

    lifecycle {
      prevent_destroy = true
    }
}

Here, we are creating an AWS instance with attributes such as:

• ami: The ID of the Ubuntu AMI retrieved earlier.

• instance_type: The type of instance to create (t2.micro).

• key_name: The key pair to use for SSH access.

• tags: Tags for identifying the instance.

• vpc_security_group_ids: The security group to associate with the instance.

• user_data: The initialization script to run on the instance (from init.tpl file).

• lifecycle.prevent_destroy: Prevents accidental destruction of the instance.

resource "aws_default_vpc" "default" {
}

Next, we declare the aws_default_vpc resource. Rather than creating a new VPC, this adopts the region’s existing default VPC into Terraform management so that the security group below can reference its ID.

resource "aws_security_group" "nodejs-ec2-instance-sg" {
    name = "nodejs-ec2-instance-sg"

    vpc_id = aws_default_vpc.default.id

    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    ingress {
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

Finally, we configure an AWS security group that controls inbound and outbound traffic for the instance.

• name: The name of the security group.

• vpc_id: The ID of the VPC in which the security group is created (the default VPC).

• ingress: Inbound rules allowing SSH (port 22) and HTTP (port 80) access from any IP. Opening port 22 to 0.0.0.0/0 is convenient for a tutorial, but in production you should narrow it to your own IP range.

• egress: Outbound rules allowing all traffic.

Automating Server Setup with an Initialization Script

In the previous sections, we set up the infrastructure for an EC2 instance. To make the server ready for hosting a Node.js application, we use an initialization script (init.tpl). This script automates the installation and configuration of necessary software components on the EC2 instance. Here’s a detailed explanation of what each part of the script does:

#!/bin/bash

This shebang line tells the system to run the script using the Bash shell.

sudo apt-get update -y

The above command updates the package lists on the instance to ensure the latest versions of the software are available for installation.

sudo apt install -y nginx

Once updated, it installs Nginx, a powerful web server and reverse proxy.

sudo apt-get install -y nodejs npm
sudo ufw allow 'Nginx Full'

Next, it installs Node.js and npm, and configures the firewall to allow the 'Nginx Full' profile, ensuring the server can handle both HTTP and HTTPS requests.

sudo apt-get install git -y

The above command installs Git so we can pull the source code from GitHub directly. Later, we need it when configuring CI/CD to deploy the application automatically.

Configuring Nginx

To set up Nginx to serve our Node.js application, we need to create and enable a custom Nginx configuration.

# Remove the default Nginx configuration
sudo rm /etc/nginx/sites-enabled/default

It removes the default Nginx configuration to avoid conflicts with our custom settings.

# Create a new Nginx configuration file
cat << 'EOF' | sudo tee /etc/nginx/sites-available/nodejs-app
server {
    listen 80;
    server_name your_domain_or_ip;

    location / {
        proxy_pass http://localhost:4500;  # Adjust the port if your Node.js app runs on a different port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
EOF

The above script creates a new Nginx configuration file for our Node.js application. This configuration sets up Nginx to listen on port 80 and proxy requests to the Node.js application running on port 4500. You should replace your_domain_or_ip with your actual domain or IP address.

# Enable the new Nginx configuration
sudo ln -s /etc/nginx/sites-available/nodejs-app /etc/nginx/sites-enabled/

It creates a symbolic link to enable the new Nginx configuration.

# Restart Nginx to apply the new configuration
sudo systemctl restart nginx

Restarts Nginx to apply the new configuration changes.

Installing PM2

Finally, we install PM2, a process manager for Node.js applications.

# Install PM2 globally
sudo npm install -g pm2

Installs PM2 globally using npm. PM2 helps manage and keep the Node.js application running smoothly by handling tasks like process management and automatic restarts.

This initialization script ensures that your EC2 instance is ready to serve a Node.js application with Nginx as a reverse proxy. By automating the installation and configuration of essential software, this script saves time and reduces the potential for errors during manual setup. With Nginx handling incoming traffic and PM2 managing the Node.js application, you have a robust setup for deploying web applications on AWS.
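For reference, the snippets above combine into a single init.tpl file inside the infra directory, along these lines (remember to replace your_domain_or_ip):

```shell
#!/bin/bash
# Update packages and install Nginx, Node.js, npm, and Git
sudo apt-get update -y
sudo apt install -y nginx
sudo apt-get install -y nodejs npm
sudo ufw allow 'Nginx Full'
sudo apt-get install git -y

# Remove the default Nginx configuration
sudo rm /etc/nginx/sites-enabled/default

# Create a new Nginx configuration file
cat << 'EOF' | sudo tee /etc/nginx/sites-available/nodejs-app
server {
    listen 80;
    server_name your_domain_or_ip;

    location / {
        proxy_pass http://localhost:4500;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
EOF

# Enable the new Nginx configuration and restart Nginx
sudo ln -s /etc/nginx/sites-available/nodejs-app /etc/nginx/sites-enabled/
sudo systemctl restart nginx

# Install PM2 globally
sudo npm install -g pm2
```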

Running Terraform Init and Apply

Once you have the Terraform configuration file, the next steps are to initialize your working directory and apply the configurations to provision the infrastructure. This is done using the terraform init and terraform apply commands.

Terraform Init

Running terraform init initializes your Terraform working directory. This command downloads the necessary provider plugins specified in your configuration file and sets up the backend to store the state of your infrastructure. Here’s a simple command to run it:

terraform init

Terraform Apply

After initializing, you apply the configuration to create or update the infrastructure with terraform apply. This command shows you the execution plan and prompts for approval before making any changes. Here’s how to run it:

terraform apply

Together, these commands prepare and deploy your infrastructure as code, ensuring a smooth and repeatable process for managing your cloud resources.
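One small, optional addition: the deployment workflow later needs the instance’s public address. You could surface it by adding an output block to main.tf (the output name here is my own choice):

```hcl
output "instance_public_ip" {
  description = "Public IP of the Node.js EC2 instance"
  value       = aws_instance.nodejs-ec2-instance.public_ip
}
```

After terraform apply, retrieve it with terraform output instance_public_ip.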

Automating Deployment with GitHub Actions

Manually running the deployment every time can be tedious and error-prone. This is where Continuous Integration and Continuous Deployment (CI/CD) come into play. By setting up a CI/CD pipeline using GitHub Actions, we automate the process of deploying our Node.js application to AWS EC2. This ensures that every time we push changes to our repository, the latest version of our application is automatically deployed without any manual intervention. This not only saves time but also minimizes the risk of human error, ensuring a consistent and reliable deployment process.

GitHub Workflow for Node.js Deployment on AWS EC2

The following GitHub workflow automates the deployment of a Node.js application to an AWS EC2 instance whenever changes are pushed to the main branch of a specified directory in your repository.

name: Nodejs AWS EC2 Workflow

on:
  push:
    branches:
      - main
    paths:
      - 'nodejs-aws-ec2/**'

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code on the runner
        uses: actions/checkout@v2

      - name: Deploy to EC2
        env:
          PRIVATE_KEY: ${{ secrets.PRIVATE_KEY }}
          HOST_NAME: ${{ secrets.HOST_NAME }}
          USER_NAME: ${{ secrets.USER_NAME }}
          REPO_URL: ${{ secrets.GIT_URL }}

        run: |
          echo "$PRIVATE_KEY" > private_key && chmod 600 private_key
          ssh -o StrictHostKeyChecking=no -i private_key ${USER_NAME}@${HOST_NAME} << EOF
            # Navigate to the home directory
            cd /home/ubuntu

            # Check if the repository directory exists
            if [ ! -d "terraform-realworld-exercises" ]; then
              echo "Repository not found. Cloning... $REPO_URL"
              git clone $REPO_URL
            else
              echo "Repository found. Pulling latest changes..."
              cd terraform-realworld-exercises
              git fetch origin main
              git reset --hard origin/main
            fi

            # Navigate to the app directory and install dependencies
            cd /home/ubuntu/terraform-realworld-exercises/nodejs-aws-ec2/app
            sudo npm install

            # Restart the application
            pm2 stop nodejs-app || true  # don't fail on the first deploy, when the app isn't running yet
            pm2 start index.js --name "nodejs-app"

          EOF

Workflow Config

  1. Trigger:
    • The workflow is triggered by a push to the main branch, specifically for changes in the nodejs-aws-ec2 directory.
  2. Job Setup:
    • The job named deploy runs on the latest Ubuntu runner.
  3. Steps:
    • Checkout Code: Uses the actions/checkout@v2 action to clone the repository code onto the runner.
    • Deploy to EC2:
      • Sets up environment variables using secrets for secure access.
      • Creates a private key file and sets appropriate permissions.
      • Connects to the EC2 instance using SSH.
      • Checks if the repository exists on the instance; if not, it clones the repository. If it exists, it pulls the latest changes.
      • Navigates to the application directory and installs necessary dependencies using npm install.
      • Uses PM2 to stop any running instance of the application and then restarts it with the latest code.

This workflow ensures that your Node.js application is automatically deployed and updated on your EC2 instance whenever you push changes to your main branch, streamlining the deployment process and reducing manual intervention.

Check out the complete code in the GitHub repository.

Conclusion

In this hands-on guide, we demonstrated how to deploy a Node.js application on AWS using Terraform and GitHub Actions. By leveraging Terraform for infrastructure as code and GitHub Actions for CI/CD automation, we created a robust, scalable, and automated deployment pipeline. This ensures efficient, consistent, and reliable deployments, setting a solid foundation for advanced DevOps practices. Continue exploring and integrating more tools and services to further enhance your cloud deployment strategies. Happy deploying!