Serverless URL Shortener

Create a serverless URL shortener in AWS with Terraform

Andre Lopes · Level Up Coding · Apr 30, 2024

Hey people!

In this article, I want to build a serverless URL shortener, like ShortURL or TinyURL, where you input a valid URL address and the application gives you a redirect URL with a small code attached.

Each piece of the infrastructure will be created and managed by Terraform, and the redirection service will leverage caching to improve the response time of redirections.

Terraform makes it much easier to build and manage your application's infrastructure. I find it quite developer-friendly, and it keeps the infrastructure blueprint organized: I always know which services I have provisioned and can easily modify or remove the ones I don't need.

The Project

Below is the project architecture:

Serverless URL shortener architecture

Our application will consist of:

  • API Gateway — The point of entrance for all our HTTP requests
  • Lambda Function — Serverless compute service that will run our code
  • DynamoDB — Serverless key-value database
  • DAX (DynamoDB Accelerator) — Caching service built for Amazon DynamoDB

The idea is that the clients will make a POST HTTP request to / with the following body:

{
  "url": "https://www.example.com"
}

The application will generate a short URL, save it in DynamoDB, and then return a response to the client:

{
  "id": "123",
  "shortUrl": "https://my-domain/a12bcaA",
  "url": "https://www.example.com"
}

Once the client calls the short URL with a GET request, for example https://my-domain.com/a12bcaA, our application will check the cache for a stored record. If it doesn't find one, it will fetch the record from our DynamoDB table.
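To make the flow concrete, here is what the two calls could look like with curl (https://my-domain.com stands in for your deployed API):

# Create a short URL
curl -X POST https://my-domain.com/ \
  -H 'Content-Type: application/json' \
  -d '{"url": "https://www.example.com"}'

# Follow the short link; -i prints the 302 status and its Location header
curl -i https://my-domain.com/a12bcaA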

All of the infrastructure will be defined as code with Terraform.

Let’s start

First, create the project folder; then we'll write the Terraform code to provision a few components in AWS.

Create our base infrastructure

Create a folder iac, where we'll set the providers that tell Terraform where to build our infrastructure. Add a providers.tf file:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket = "YOUR_BUCKET_NAME_HERE"
    key    = "url-shortener/state"
  }
}

# Configure the AWS Provider
provider "aws" {}

Note that we have the backend set to s3 so Terraform can store its state remotely and keep track of our changes. If you don't need remote state, you can just remove this block.
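If you want to sanity-check the configuration locally before wiring up CI (assuming your AWS credentials and region are set in your environment), the standard workflow applies:

cd iac
terraform init  # downloads the AWS provider and configures the S3 backend
terraform plan  # shows what would be created without applying it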

Now let's move on to creating our Terraform module for Lambda functions.

Creating lambdas

Create a folder iac/modules/lambda in the root path. Then create a variables.tf file where we’ll define our module’s input variables:

variable "name" {
description = "The name of the Lambda function"
type = string
nullable = false
}

variable "source_file_path" {
description = "The path to the source file code"
type = string
}

variable "policies" {
description = "The policies for this lambda."
type = list(string)
default = null
}

Now, create a datasources.tf file where we'll define the data sources we need:

data "archive_file" "lambda" {
type = "zip"
source_file = var.source_file_path
output_path = "${var.name}_lambda_function_payload.zip"
}

data "aws_iam_policy_document" "assume_role" {

statement {
effect = "Allow"

principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}

actions = ["sts:AssumeRole"]

}
}

data "aws_iam_policy_document" "policies" {
override_policy_documents = var.policies

statement {
effect = "Allow"
sid = "LogToCloudwatch"
actions = [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
]

resources = ["arn:aws:logs:*:*:*"]
}
}

Here we define the archive file for our initial code, along with the baseline policies that let the Lambda service assume the IAM role and write logs to CloudWatch. We also set override_policy_documents so that callers can pass in additional policies when needed.

Now, create a main.tf file where we’ll define our lambda module code:

resource "aws_iam_role" "iam_for_lambda" {
name = "${var.name}-lambda-role"
assume_role_policy = data.aws_iam_policy_document.assume_role.json
inline_policy {
name = "DefaultPolicy"
policy = data.aws_iam_policy_document.policies.json
}
}

resource "aws_lambda_function" "lambda" {
filename = data.archive_file.lambda.output_path
function_name = var.name
role = aws_iam_role.iam_for_lambda.arn
handler = "index.handler"
runtime = "nodejs20.x"
}

And then an outputs.tf file where we export some values that we’ll need later:

output "arn" {
value = aws_lambda_function.lambda.arn
}

output "name" {
value = aws_lambda_function.lambda.function_name
}

output "invoke_arn" {
value = aws_lambda_function.lambda.invoke_arn
}

output "role_name" {
value = aws_iam_role.iam_for_lambda.name
}

Now that we have our module, let's create our first lambda, starting with some initial code. This is necessary because you cannot create a Lambda function without a deployment package.

So, create an init_code folder under iac and add a file named index.mjs:

// Default handler generated in AWS
export const handler = async (event) => {
const response = {
statusCode: 200,
body: JSON.stringify('Hello from Lambda!'),
};

return response;
};

Now create a lambdas.tf file where we’ll declare our lambda functions:

module "create_short_url_lambda" {
source = "./modules/lambda"
name = "create-short-url"
source_file_path = "./init_code/index.mjs"
}

module "redirect_lambda" {
source = "./modules/lambda"
name = "redirect"
source_file_path = "./init_code/index.mjs"
}

Creating the database

Now let’s generate our database by creating a dynamodb.tf file under the iac folder:

resource "aws_dynamodb_table" "urls" {
name = "urls"
billing_mode = "PROVISIONED"
read_capacity = 1
write_capacity = 1
hash_key = "ID"
range_key = "Code"

attribute {
name = "ID"
type = "S"
}

attribute {
name = "Code"
type = "S"
}

attribute {
name = "URL"
type = "S"
}

global_secondary_index {
name = "CodeIndex"
hash_key = "Code"
range_key = "URL"
projection_type = "ALL"
read_capacity = 1
write_capacity = 1
}
}

We set the attributes to be:

  • ID — An internal identifier for the URL
  • Code — The short code, generated internally, that is publicly visible and passed in the URL. E.g., in https://my-domain.com/123abc, 123abc is the code.
  • URL — The real URL that the code maps to and redirects to

We created a global_secondary_index because we’ll need to query by the Code later.
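Once the table has data, you can sanity-check the index from the AWS CLI (the code value here is hypothetical):

aws dynamodb query \
  --table-name urls \
  --index-name CodeIndex \
  --key-condition-expression "Code = :code" \
  --expression-attribute-values '{":code": {"S": "a12bcaA"}}'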

Now we need to give our lambda function permissions to add data to our DynamoDB table.

Create an iam-policies.tf file under iac:

data "aws_iam_policy_document" "get_movie_item" {
statement {
effect = "Allow"

actions = [
"dynamodb:PutItem",
]

resources = [
aws_dynamodb_table.urls.arn
]
}
}

data "aws_iam_policy_document" "allow_get_url_lambda" {
statement {
effect = "Allow"

actions = [
"dynamodb:GetItem",
]

resources = [
aws_dynamodb_table.urls.arn
]
}
}

Then, in the lambdas.tf file, we need to pass these policy documents to our lambda modules:

module "create_short_url_lambda" {
source = "./modules/lambda"
name = "create-short-url"
source_file_path = "./init_code/index.mjs"
policies = [
data.aws_iam_policy_document.create_short_url_lambda.json
]
}

module "redirect_lambda" {
source = "./modules/lambda"
name = "redirect"
source_file_path = "./init_code/index.mjs"
policies = [
data.aws_iam_policy_document.create_short_url_lambda.json
]
}

API Gateway

Now we need to define our endpoint in our API Gateway. Let’s first create a module for our HTTP method. Create a folder api-method under iac/modules and add a variables.tf file:

variable "http_method" {
description = "The HTTP method"
type = string
}

variable "resource_id" {
description = "The ID of the resource this method is attached to"
type = string
}

variable "api_id" {
description = "The ID of the API this method is attached to"
type = string
}

variable "integration_uri" {
description = "The URI of the integration this method will call"
type = string
}

variable "resource_path" {
description = "The path of the resource"
type = string
}

variable "lambda_function_name" {
description = "The name of the Lambda function that will be called"
type = string
}

variable "execution_arn" {
description = "The execution ARN of the API"
type = string
}

Now create a main.tf file:

resource "aws_api_gateway_method" "method" {
authorization = "NONE"
http_method = var.http_method
resource_id = var.resource_id
rest_api_id = var.api_id
}

resource "aws_api_gateway_integration" "integration" {
http_method = aws_api_gateway_method.method.http_method
integration_http_method = "POST" # Lambda functions can only be invoked via POST
resource_id = var.resource_id
rest_api_id = var.api_id
type = "AWS_PROXY"
uri = var.integration_uri
}

resource "aws_lambda_permission" "apigw_lambda" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = var.lambda_function_name
principal = "apigateway.amazonaws.com"
source_arn = "${var.execution_arn}/*/${aws_api_gateway_method.method.http_method}${var.resource_path}"
}

And now an outputs.tf file:

output "id" {
value = aws_api_gateway_method.method.id
}

output "integration_id" {
value = aws_api_gateway_integration.integration.id
}

With this, we can now create our API by creating a new file rest-api.tf under the iac folder:

# API Gateway
resource "aws_api_gateway_rest_api" "url_shortener_api" {
  name = "url-shortener-api"
}

resource "aws_api_gateway_deployment" "url_shortener_api_deployment" {
  rest_api_id = aws_api_gateway_rest_api.url_shortener_api.id

  triggers = {
    redeployment = sha1(jsonencode([
      aws_api_gateway_rest_api.url_shortener_api.root_resource_id,
      aws_api_gateway_resource.redirect_resource.id,
      module.post_url_method.id,
      module.post_url_method.integration_id,
      module.redirect_url_method.id,
      module.redirect_url_method.integration_id,
    ]))
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_api_gateway_resource" "redirect_resource" {
  parent_id   = aws_api_gateway_rest_api.url_shortener_api.root_resource_id
  path_part   = "{redirectCode}"
  rest_api_id = aws_api_gateway_rest_api.url_shortener_api.id
}

resource "aws_api_gateway_stage" "live" {
  deployment_id = aws_api_gateway_deployment.url_shortener_api_deployment.id
  rest_api_id   = aws_api_gateway_rest_api.url_shortener_api.id
  stage_name    = "live"
}

module "post_url_method" {
  source               = "./modules/api-method"
  api_id               = aws_api_gateway_rest_api.url_shortener_api.id
  http_method          = "POST"
  resource_id          = aws_api_gateway_rest_api.url_shortener_api.root_resource_id
  resource_path        = "/"
  integration_uri      = module.create_short_url_lambda.invoke_arn
  lambda_function_name = module.create_short_url_lambda.name
  execution_arn        = aws_api_gateway_rest_api.url_shortener_api.execution_arn
}

module "redirect_url_method" {
  source               = "./modules/api-method"
  api_id               = aws_api_gateway_rest_api.url_shortener_api.id
  http_method          = "GET"
  resource_id          = aws_api_gateway_resource.redirect_resource.id
  resource_path        = aws_api_gateway_resource.redirect_resource.path
  integration_uri      = module.redirect_lambda.invoke_arn
  lambda_function_name = module.redirect_lambda.name
  execution_arn        = aws_api_gateway_rest_api.url_shortener_api.execution_arn
}
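It's also handy to export the stage's invoke URL, since we'll need it later as the base URL for our short links. This output block is my addition; drop it into rest-api.tf if you want it:

output "api_url" {
  description = "The base URL of the live stage"
  value       = aws_api_gateway_stage.live.invoke_url
}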

For our cache, add a cache.tf file with the following (later, in the DAX section, we'll replace this resource):

resource "aws_elasticache_serverless_cache" "urls_cache" {
engine = "redis"
name = "urls"
cache_usage_limits {
data_storage {
maximum = 1
unit = "GB"
}
ecpu_per_second {
maximum = 5000
}
}
description = "URLs cache"
major_engine_version = "7"
}

Note that ElastiCache Serverless is not covered by the Free Tier. It charges per GB-hour of stored data; for example, in eu-central-1 it costs $0.151 per GB-hour. See https://aws.amazon.com/elasticache/pricing/ for more information.

Deploy

To deploy our infrastructure, we are going to use GitHub Actions.

So let’s create a .github/workflows folder under the root folder and add a deploy-infrastructure.yml file:

name: Deploy Infrastructure
on:
  push:
    branches:
      - main
    paths:
      - iac/**/*
      - .github/workflows/deploy-infrastructure.yml

defaults:
  run:
    working-directory: iac/

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS Credentials Action For GitHub Actions
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1 # Add your region here

      # Install the latest version of the Terraform CLI
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      # Initialize the working directory: configure the S3 backend,
      # load any remote state, and download providers and modules
      - name: Terraform Init
        run: terraform init

      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        run: terraform fmt -check

      # Generates an execution plan for Terraform
      - name: Terraform Plan
        run: terraform plan -out=plan -input=false

      # On push to "main", build or change infrastructure according to the plan
      - name: Terraform Apply
        run: terraform apply -auto-approve -input=false plan

Note that for this to work, you must add your AWS access key and secret access key to your repository secrets. These values can be generated in AWS for a user with permissions to create the resources used in this project.

Now you can push your code to GitHub and see your infrastructure be created.
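If you want to double-check from the terminal, the AWS CLI can list the deployed API:

aws apigateway get-rest-apis --query 'items[].{id:id,name:name}'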

Creating a short URL

Now let’s implement the lambda to create a short URL through the / endpoint.

Let’s create a folder apps/create-short-url and then initialize our TypeScript project with:

npm init -y

Then let’s add typescript to our project with:

npm i --save typescript

And then add the following tsconfig.json file:

{
  "compilerOptions": {
    /* Language and Environment */
    "target": "esnext",

    /* Modules */
    "module": "nodenext",
    "rootDir": "src",
    "moduleResolution": "nodenext",
    "resolveJsonModule": true,

    /* Emit */
    "outDir": "build",
    "newLine": "lf",

    /* Interop Constraints */
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,

    /* Type Checking */
    "strict": true,
    "noImplicitAny": true,
    "noImplicitThis": true,

    /* Completeness */
    "skipLibCheck": true
  }
}

Now let's add the libraries we need for our project:

npm i --save @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb
npm i -D @types/aws-lambda @types/node copyfiles
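One gap worth closing now: the deploy workflow later runs npm run build, but npm init doesn't create that script. A minimal scripts section for package.json, assuming plain tsc compilation into build (per our tsconfig), would be:

"scripts": {
  "build": "tsc"
}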

Now create a folder src and add an index.ts file with the implementation of our lambda:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand } from '@aws-sdk/lib-dynamodb';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

const tableName = 'urls';
const baseUrl: string = process.env.BASE_URL || '';

type ShortUrlRequest = {
  url: string;
};

type Response = {
  id: string;
  shortUrl: string;
  url: string;
};

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  let request: ShortUrlRequest;

  try {
    request = JSON.parse(event.body || '{}');
  } catch {
    return {
      statusCode: 400,
      body: JSON.stringify({
        message: 'Invalid request body',
      }),
    };
  }

  if (!request || !request.url) {
    return {
      statusCode: 400,
      body: JSON.stringify({
        message: 'URL is required',
      }),
    };
  }

  console.log('Processing request ', request);

  const client = new DynamoDBClient({});
  const docClient = DynamoDBDocumentClient.from(client);

  const id = crypto.randomUUID();
  const code = generateCode();

  const command = new PutCommand({
    TableName: tableName,
    Item: {
      ID: id,
      Code: code,
      URL: request.url,
    },
  });

  try {
    await docClient.send(command);

    const response: Response = {
      id: id,
      shortUrl: baseUrl + code,
      url: request.url,
    };

    return {
      statusCode: 201,
      body: JSON.stringify(response),
    };
  } catch (e: any) {
    console.log(e);

    return {
      statusCode: 500,
      body: JSON.stringify({
        message: e.message,
      }),
    };
  }
};

// A simple implementation for generating a small hash ID
function generateCode(): string {
  const alphabet =
    'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  const length = 7;

  const code: string[] = [];

  for (let i = 0; i < length; i++) {
    // Math.floor keeps the index within [0, alphabet.length - 1];
    // Math.round could produce an out-of-range index
    const randomIndex: number = Math.floor(Math.random() * alphabet.length);

    code[i] = alphabet[randomIndex];
  }

  return code.join('');
}

Here we have just a simple request validation, code generation, and item creation in DynamoDB.
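The validation above only checks that url is present. If you also want to reject strings that aren't parseable URLs, a small helper like this (my addition, not in the original code) works with Node's built-in URL parser:

// Hypothetical stricter check: new URL() throws on unparseable input
function isValidUrl(candidate: string): boolean {
  try {
    const parsed = new URL(candidate);
    return parsed.protocol === 'http:' || parsed.protocol === 'https:';
  } catch {
    return false;
  }
}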

Note that generateCode() is just a simple implementation for generating a hash ID. With this alphabet of 62 characters and a length of 7, we get 62^7 = 3,521,614,606,208 possible combinations.
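Math.random() is fine for a demo, but it isn't cryptographically strong. If you want harder-to-guess codes, Node's built-in crypto.randomInt is a near drop-in alternative; a sketch:

import { randomInt } from 'node:crypto';

// randomInt's upper bound is exclusive, so the index always stays in range
function generateCodeSecure(length = 7): string {
  const alphabet =
    'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  let code = '';
  for (let i = 0; i < length; i++) {
    code += alphabet[randomInt(alphabet.length)];
  }
  return code;
}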

Now let's add our BASE_URL environment variable to our lambda. First, add a variable to the iac/modules/lambda/variables.tf file:

variable "name" {
description = "The name of the Lambda function"
type = string
nullable = false
}

variable "source_file_path" {
description = "The path to the source file code"
type = string
}

variable "policies" {
description = "The policies for this lambda."
type = list(string)
default = null
}

variable "environment_variables" {
description = "The lambdas environment variables."
type = map(string)
default = null
}

Then add it to our iac/modules/lambda/main.tf file with:

resource "aws_iam_role" "iam_for_lambda" {
name = "${var.name}-lambda-role"
assume_role_policy = data.aws_iam_policy_document.assume_role.json
inline_policy {
name = "DefaultPolicy"
policy = data.aws_iam_policy_document.policies.json
}
}

resource "aws_lambda_function" "lambda" {
filename = data.archive_file.lambda.output_path
function_name = var.name
role = aws_iam_role.iam_for_lambda.arn
handler = "index.handler"
runtime = "nodejs20.x"

environment {
variables = var.environment_variables
}
}

And finally, add the values to our module registration:

module "create_short_url_lambda" {
source = "./modules/lambda"
name = "create-short-url"
source_file_path = "./init_code/index.mjs"
policies = [
data.aws_iam_policy_document.create_short_url_lambda.json
]

environment_variables = {
BASE_URL = "YOUR_API_BASE_URL_HERE",
}
}

module "redirect_lambda" {
source = "./modules/lambda"
name = "redirect"
source_file_path = "./init_code/index.mjs"
policies = [
data.aws_iam_policy_document.create_short_url_lambda.json
]
}

Remember to replace YOUR_API_BASE_URL_HERE with your API's base URL: the live stage's invoke URL (the api_url output we exported earlier), including a trailing slash, since the code is appended directly to it.

Now we just need to add our workflow to deploy our lambda by creating a .github/workflows/deploy-create-short-url-lambda.yml file:

name: Deploy Create Short URL Lambda
on:
  push:
    branches:
      - main
    paths:
      - apps/create-short-url/**/*
      - .github/workflows/deploy-create-short-url-lambda.yml

defaults:
  run:
    working-directory: apps/create-short-url/

jobs:
  deploy:
    name: 'Deploy Create Short URL Lambda'
    runs-on: ubuntu-latest
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      - name: Setup NodeJS
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Configure AWS Credentials Action For GitHub Actions
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1

      - name: Install packages
        run: npm install

      - name: Build
        run: npm run build

      - name: Zip build
        run: cd build && zip -r ../main.zip .

      - name: Update Lambda code
        run: aws lambda update-function-code --function-name=create-short-url --zip-file=fileb://main.zip

Push your code to GitHub and wait for the deployment to finish.

Once it's done, you can send a POST request to your API's root endpoint / with a URL to shorten:

Example request:

{
  "url": "https://www.google.com"
}

Example response:

{
  "id": "c3e346ae-eaf3-4e5a-9042-5fbee8436514",
  "shortUrl": "https://my_api.execute-api.my_region.amazonaws.com/live/J5EbXJ",
  "url": "https://www.google.com"
}

Redirecting

Now, to implement our redirect lambda, let’s create a folder redirect inside our apps folder and then initialize our TypeScript project with:

npm init -y

Then let’s add typescript to our project with:

npm i --save typescript

And then add the following tsconfig.json file:

{
  "compilerOptions": {
    /* Language and Environment */
    "target": "esnext",

    /* Modules */
    "module": "nodenext",
    "rootDir": "src",
    "moduleResolution": "nodenext",
    "resolveJsonModule": true,

    /* Emit */
    "outDir": "build",
    "newLine": "lf",

    /* Interop Constraints */
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,

    /* Type Checking */
    "strict": true,
    "noImplicitAny": true,
    "noImplicitThis": true,

    /* Completeness */
    "skipLibCheck": true
  }
}

Now let's add the libraries we need for our project. Note that, unlike the first lambda, this one uses the AWS SDK v2 package (aws-sdk), for reasons explained below:

npm i --save aws-sdk
npm i -D @types/aws-lambda @types/node copyfiles

Now, create a src folder and add an index.ts file with the following code:

import AWS from 'aws-sdk';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

const tableName = 'urls';
const redirectCodeParam = 'redirectCode';

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  if (!event.pathParameters || !event.pathParameters[redirectCodeParam]) {
    return {
      statusCode: 400,
      body: JSON.stringify({
        message: 'Redirect code missing',
      }),
    };
  }

  const redirectCode: string = event.pathParameters[redirectCodeParam];

  console.log('Processing request code ', redirectCode);

  const client = new AWS.DynamoDB.DocumentClient();

  try {
    const dynamoResponse = await client
      .query({
        TableName: tableName,
        IndexName: 'CodeIndex',
        KeyConditionExpression: 'Code = :code',
        ExpressionAttributeValues: {
          ':code': redirectCode,
        },
      })
      .promise();

    if (!dynamoResponse.Items || dynamoResponse.Items.length === 0) {
      return {
        statusCode: 404,
        body: JSON.stringify({
          message: 'URL not found',
        }),
      };
    }

    // For simplicity, take the first item as our expected URL
    const url: string = dynamoResponse.Items[0].URL;

    console.log('Redirecting code %s to URL %s', redirectCode, url);

    return {
      statusCode: 302,
      headers: {
        Location: url,
      },
      body: '',
    };
  } catch (e: any) {
    console.log(e);

    return {
      statusCode: 500,
      body: JSON.stringify({
        message: e.message,
      }),
    };
  }
};

This is a basic implementation for redirecting.

Note that here we are using the AWS SDK for JavaScript v2 instead of the v3 DynamoDB libraries. The reason is that we'll add DAX later, and the amazon-dax-client library doesn't support the JavaScript SDK v3.

Now we just need to add our workflow to deploy our lambda by creating a .github/workflows/deploy-redirect-lambda.yml file:

name: Deploy Redirect Lambda
on:
  push:
    branches:
      - main
    paths:
      - apps/redirect/**/*
      - .github/workflows/deploy-redirect-lambda.yml

defaults:
  run:
    working-directory: apps/redirect/

jobs:
  deploy:
    name: 'Deploy Redirect Lambda'
    runs-on: ubuntu-latest
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      - name: Setup NodeJS
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Configure AWS Credentials Action For GitHub Actions
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1

      - name: Install packages
        run: npm install

      - name: Build
        run: npm run build

      # aws-sdk v2 and amazon-dax-client are not bundled in the nodejs20.x
      # runtime, so include node_modules in the deployment package
      - name: Zip build
        run: |
          cp -r node_modules build/node_modules
          cd build && zip -r ../main.zip .

      - name: Update Lambda code
        run: aws lambda update-function-code --function-name=redirect --zip-file=fileb://main.zip

Now push the code to GitHub and wait for the lambda to be deployed. You can then paste a short URL into your browser and be redirected to the target website, for example https://your_domain.com/live/kSjwRSn.

Adding cache

Now we need to add a cache to our implementation.

We’ll use DAX (DynamoDB Accelerator), a fully managed, highly available caching service built for DynamoDB.

Let’s start by creating our DAX cluster with Terraform.

Because DAX must run inside a VPC for security reasons, we'll need to define a VPC or use an existing one. I'll use the default VPC.

Create a vpc.tf file in the iac folder:

data "aws_subnets" "default_vpc" {
filter {
name = "vpc-id"
values = ["YOUR_VPC_ID"]
}
}

data "aws_security_group" "default_security_group" {
id = "YOUR_SECURITY_GROUP"
}

Here we are getting the default subnets and security group in our account.

The values can be found in your AWS account under the VPC console.
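If you'd rather not hardcode the IDs, the AWS provider can look them up for you. An equivalent sketch using the aws_vpc data source:

data "aws_vpc" "default" {
  default = true
}

data "aws_subnets" "default_vpc" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

data "aws_security_group" "default_security_group" {
  vpc_id = data.aws_vpc.default.id
  name   = "default"
}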

Now, let's create our DAX cluster in the cache.tf file, replacing the earlier ElastiCache resource (DAX takes over the caching role from here on):

resource "aws_dax_cluster" "urls" {
cluster_name = "urls"
iam_role_arn = aws_iam_role.dax.arn
node_type = "dax.t2.small"
replication_factor = 1
security_group_ids = [data.aws_security_group.default_security_group.id]
subnet_group_name = "default"
}

resource "aws_iam_role" "dax" {
name = "urls-dax-role"
assume_role_policy = data.aws_iam_policy_document.assume_dax_role.json
inline_policy {
name = "DefaultPolicy"
policy = data.aws_iam_policy_document.allow_get_url.json
}
}

data "aws_iam_policy_document" "assume_dax_role" {
statement {
effect = "Allow"

principals {
type = "Service"
identifiers = ["dax.amazonaws.com"]
}

actions = [
"sts:AssumeRole",
]
}
}

data "aws_iam_policy_document" "allow_get_url" {
statement {
effect = "Allow"

actions = [
"dynamodb:DescribeTable",
"dynamodb:PutItem",
"dynamodb:GetItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:BatchGetItem",
"dynamodb:BatchWriteItem",
"dynamodb:ConditionCheckItem"
]

resources = [
"${aws_dynamodb_table.urls.arn}/index/${local.codeIndex}",
"${aws_dynamodb_table.urls.arn}",
]
}
}

Here we are creating a DAX cluster and giving it the right to perform actions in our DynamoDB table.

Now, let's modify our redirect lambda to access DAX. In iam-policies.tf, modify the allow_get_url_lambda policy so it grants access to DAX instead of DynamoDB:

data "aws_iam_policy_document" "allow_get_url_lambda" {
statement {
effect = "Allow"

actions = [
"dax:Query",
]

resources = [
"${aws_dax_cluster.urls.arn}",
]
}
}

And now, let’s add our lambdas to our VPC and attach the default security group. In the main.tf file of our lambda module, let’s add VPC configuration support:

resource "aws_iam_role" "iam_for_lambda" {
name = "${var.name}-lambda-role"
assume_role_policy = data.aws_iam_policy_document.assume_role.json
inline_policy {
name = "DefaultPolicy"
policy = data.aws_iam_policy_document.policies.json
}
}

resource "aws_lambda_function" "lambda" {
filename = data.archive_file.lambda.output_path
function_name = var.name
role = aws_iam_role.iam_for_lambda.arn
handler = "index.handler"
runtime = "nodejs20.x"

dynamic "vpc_config" {
for_each = var.has_vpc ? [1] : []
content {
security_group_ids = var.security_group_ids
subnet_ids = var.subnet_ids
}
}

environment {
variables = var.environment_variables
}
}

Here we leverage a dynamic block so that vpc_config is only rendered when VPC configuration is requested.

Now in the variables.tf , let’s add our security_group_ids, subnet_ids, and has_vpc variables:

variable "name" {
description = "The name of the Lambda function"
type = string
nullable = false
}

variable "source_file_path" {
description = "The path to the source file code"
type = string
}

variable "policies" {
description = "The policies for this lambda."
type = list(string)
default = null
}

variable "environment_variables" {
description = "The lambdas environment variables."
type = map(string)
default = null
}

variable "has_vpc" {
description = "If a lambda function requires VPC configuration"
type = bool
default = false
}

variable "security_group_ids" {
description = "The security groups for this lambda."
type = set(string)
default = null
}

variable "subnet_ids" {
description = "The subnets for this lambda."
type = set(string)
default = null
}

And now, modify the IAM role policies in datasources.tf so the lambda can attach to a VPC:

data "archive_file" "lambda" {
type = "zip"
source_file = var.source_file_path
output_path = "${var.name}_lambda_function_payload.zip"
}

data "aws_iam_policy_document" "assume_role" {

statement {
effect = "Allow"

principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}

actions = ["sts:AssumeRole"]
}
}

data "aws_iam_policy_document" "policies" {
override_policy_documents = var.policies

statement {
effect = "Allow"
sid = "LogToCloudwatch"
actions = [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
]

resources = ["arn:aws:logs:*:*:*"]
}

statement {
effect = "Allow"
sid = "VPCPermissions"
actions = [
"ec2:DescribeNetworkInterfaces",
"ec2:CreateNetworkInterface",
"ec2:DeleteNetworkInterface",
"ec2:DescribeInstances",
"ec2:AttachNetworkInterface"
]

resources = ["*"]
}
}

Great! With our module updated, it's time to enable VPC access on our redirect lambda.

In the lambdas.tf, let’s add our default VPC and security group to our lambda:

module "redirect_lambda" {
source = "./modules/lambda"
name = "redirect"
source_file_path = "./init_code/index.mjs"
policies = [
data.aws_iam_policy_document.allow_get_url_lambda.json
]

environment_variables = {
DAX_ENDPOINT = aws_dax_cluster.urls.cluster_address
}

has_vpc = true
security_group_ids = [data.aws_security_group.default_security_group.id]
subnet_ids = data.aws_subnets.default_vpc.ids

depends_on = [aws_dax_cluster.urls]
}

And now, let’s update our Redirect lambda Typescript code to make use of DAX.

First, navigate to apps/redirect and add the amazon-dax-client library:

npm i --save amazon-dax-client

And then, in the index.ts file, use:

import AWS from 'aws-sdk';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
// @ts-ignore
import AmazonDaxClient from 'amazon-dax-client';

const tableName = 'urls';
const redirectCodeParam = 'redirectCode';

const daxEndpoint = `dax://${process.env.DAX_ENDPOINT}`;

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  if (!event.pathParameters || !event.pathParameters[redirectCodeParam]) {
    return {
      statusCode: 400,
      body: JSON.stringify({
        message: 'Redirect code missing',
      }),
    };
  }

  const redirectCode: string = event.pathParameters[redirectCodeParam];

  console.log('Processing request code ', redirectCode);

  // Create the DAX client
  const dax = new AmazonDaxClient({
    endpoints: [daxEndpoint],
  });

  // Use the DAX client as the underlying service for the DocumentClient
  const client = new AWS.DynamoDB.DocumentClient({ service: dax });

  try {
    const dynamoResponse = await client
      .query({
        TableName: tableName,
        IndexName: 'CodeIndex',
        KeyConditionExpression: 'Code = :code',
        ExpressionAttributeValues: {
          ':code': redirectCode,
        },
      })
      .promise();

    if (!dynamoResponse.Items || dynamoResponse.Items.length === 0) {
      return {
        statusCode: 404,
        body: JSON.stringify({
          message: 'URL not found',
        }),
      };
    }

    // For simplicity, take the first item as our expected URL
    const url: string = dynamoResponse.Items[0].URL;

    console.log('Redirecting code %s to URL %s', redirectCode, url);

    return {
      statusCode: 302,
      headers: {
        Location: url,
      },
      body: '',
    };
  } catch (e: any) {
    console.log(e);

    return {
      statusCode: 500,
      body: JSON.stringify({
        message: e.message,
      }),
    };
  }
};

Note the @ts-ignore above the amazon-dax-client import. If you are using TypeScript, this is necessary because the library doesn't ship correct type definitions, so we tell TypeScript to ignore the missing types and treat the import as any.
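If you'd rather not scatter @ts-ignore comments, an alternative (my suggestion, not from the original project) is a one-line ambient module declaration, for example in src/amazon-dax-client.d.ts:

// Declares the untyped module so the import type-checks as any
declare module 'amazon-dax-client';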

Now push your code to GitHub and wait for the build to finish.

The Terraform run might take a while, because DAX cluster creation is slow. If the workflow fails, re-run it and it should succeed.

After the build succeeds, you can test the redirection endpoint again. You should get the same result, but now served through DAX. You can verify this by checking the monitoring metrics of your DAX cluster:

Some DAX cluster metrics

Conclusion

In this story, we went step by step through building a fully operational serverless URL shortener in AWS using Terraform.

You saw how powerful IaC is and how easily you can build up your infrastructure piece by piece.

Not only that, you learned how to build a fully functional API using API Gateway, Lambda functions, and DynamoDB.

You also learned how to integrate the DAX (DynamoDB Accelerator) caching service with a lambda function, so that reads from your DynamoDB table are served from a DAX cluster.

The source code for this project can be found here.

Happy coding 💻
