How to Use Terraform Modules
Terraform modules are reusable, self-contained packages of Terraform configurations that encapsulate infrastructure logic and can be shared across multiple projects. They are one of the most powerful features of Terraform, enabling teams to write infrastructure as code (IaC) in a scalable, maintainable, and consistent way. Whether you're managing a small development environment or a large multi-cloud production architecture, Terraform modules help reduce duplication, enforce standards, and accelerate deployment cycles. This guide provides a comprehensive, step-by-step walkthrough on how to use Terraform modules effectively, from creation and consumption to advanced patterns and real-world best practices. By the end of this tutorial, you'll understand not only how to use modules, but why they are essential for modern infrastructure automation.
Step-by-Step Guide
Understanding Terraform Modules
Before diving into implementation, it's critical to understand what a Terraform module is and how it differs from standalone configuration files. A module is a directory containing one or more .tf files that define resources, variables, outputs, and sometimes local values and data sources. Unlike a root module (the main configuration you run with terraform apply), a module is designed to be called from another configuration. Think of it like a function in programming: you define inputs (arguments), perform operations (provision resources), and return outputs (values).
Modules promote the DRY (Don't Repeat Yourself) principle. Instead of copying and pasting the same AWS VPC, EC2 instance, or Kubernetes cluster configuration across multiple environments (dev, staging, prod), you write it once in a module and reuse it with different parameters. This reduces errors, improves consistency, and simplifies updates.
Creating Your First Module
To create a Terraform module, follow these steps:
- Create a new directory for your module, e.g., modules/vpc.
- Inside this directory, create a file named main.tf.
- Define the resources your module will provision. For example, here's a simple VPC module:
# Look up the availability zones referenced by the subnets below
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = var.vpc_name
  }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "${var.vpc_name}-igw"
  }
}

resource "aws_subnet" "public" {
  count             = length(var.public_subnets)
  cidr_block        = var.public_subnets[count.index]
  availability_zone = data.aws_availability_zones.available.names[count.index]
  vpc_id            = aws_vpc.main.id

  tags = {
    Name = "${var.vpc_name}-public-${count.index + 1}"
  }
}
Next, define the inputs your module expects in a file called variables.tf:
variable "vpc_cidr" {
  description = "The CIDR block for the VPC"
  type        = string
}

variable "vpc_name" {
  description = "Name tag for the VPC and related resources"
  type        = string
}

variable "public_subnets" {
  description = "List of CIDR blocks for public subnets"
  type        = list(string)
}
Finally, define outputs that other modules or the root configuration can consume in outputs.tf:
output "vpc_id" {
  value = aws_vpc.main.id
}

output "public_subnet_ids" {
  value = aws_subnet.public[*].id
}
At this point, your module is ready. It has inputs, outputs, and resources. No root configuration has been created yet; this module is designed to be reused.
Calling a Module from the Root Configuration
To use your newly created module, navigate to your root Terraform project directory (typically the top-level folder containing your main.tf). Create or edit main.tf and add a module block:
module "vpc" {
  source = "./modules/vpc"

  vpc_cidr       = "10.0.0.0/16"
  vpc_name       = "my-app-vpc"
  public_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}
The source argument tells Terraform where to find the module. Here, it's a local path. Terraform will automatically read all .tf files in that directory and treat them as a single module.
After defining the module, run:
- terraform init: initializes the backend and downloads any modules referenced.
- terraform plan: previews the infrastructure changes.
- terraform apply: provisions the resources.
Terraform will now create the VPC, Internet Gateway, and public subnets defined in your module. The beauty is that you can now call this same module from another project or environment with different values, say for staging or production, without duplicating code.
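For example, a staging environment could reuse the exact same module with its own values (the CIDRs and names below are illustrative):

```hcl
module "vpc_staging" {
  source = "./modules/vpc"

  vpc_cidr       = "10.1.0.0/16"
  vpc_name       = "my-app-staging-vpc"
  public_subnets = ["10.1.1.0/24", "10.1.2.0/24"]
}
```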
Using Remote Modules
While local modules are great for internal reuse within a single codebase, remote modules allow teams to share infrastructure components across multiple organizations or repositories. Terraform supports modules from:
- GitHub repositories
- GitLab, Bitbucket, or other Git providers
- The Terraform Registry (public or private)
- Amazon S3 buckets
- HTTP URLs
To use a module from the Terraform Registry, change the source in your module block:
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.14.0"

  name           = "my-app-vpc"
  cidr           = "10.0.0.0/16"
  public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  azs            = ["us-west-2a", "us-west-2b", "us-west-2c"]
}
This example uses the popular terraform-aws-modules/vpc/aws module from the official Terraform Registry. Terraform automatically downloads the module and caches it in the .terraform directory. Version pinning (via version) ensures reproducibility and prevents unexpected breaking changes.
Module Versioning and Locking
Version control is critical when using remote modules. Without it, a simple terraform apply could pull in a new version of a module that introduces breaking changes. Always specify a version constraint:
- version = "3.14.0": exact version
- version = "~> 3.14.0": allows patch updates (e.g., 3.14.1, 3.14.9)
- version = ">= 3.14.0, < 4.0.0": allows minor updates within a major version
When you run terraform init, Terraform generates a .terraform.lock.hcl file. Note that this lock file records provider versions, not module versions, so module pinning still depends on the version constraints in your module blocks. Commit the lock file to version control so every team member resolves the exact same provider versions.
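For reference, a provider entry in the generated lock file looks roughly like this (the version, constraint, and omitted checksums are illustrative; never edit this file by hand):

```hcl
# .terraform.lock.hcl -- generated by `terraform init`
provider "registry.terraform.io/hashicorp/aws" {
  version     = "4.67.0"
  constraints = "~> 4.0"
  hashes = [
    # checksums recorded by terraform init (omitted here)
  ]
}
```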
Module Dependencies and Nested Modules
Modules can depend on other modules. For example, you might have a module for VPC, another for security groups, and a third for EC2 instances. The EC2 module can depend on outputs from the VPC and security group modules.
Heres how you chain them:
module "vpc" {
  source = "./modules/vpc"
  # ... inputs
}

module "security_groups" {
  source = "./modules/security-groups"
  vpc_id = module.vpc.vpc_id
}

module "ec2_instances" {
  source             = "./modules/ec2"
  subnet_ids         = module.vpc.public_subnet_ids
  security_group_ids = module.security_groups.security_group_ids
}
This creates a dependency graph where Terraform provisions the VPC first, then the security groups, then the EC2 instances. Terraform automatically resolves these dependencies and applies resources in the correct order.
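When two modules have no data dependency but must still be applied in order, you can make the ordering explicit with depends_on, which is supported on module blocks since Terraform 0.13. A minimal sketch (the app module here is hypothetical):

```hcl
module "app" {
  source = "./modules/app" # hypothetical module that references nothing from module.vpc

  # Force Terraform to finish the VPC before touching this module,
  # even though no output of module.vpc is consumed here.
  depends_on = [module.vpc]
}
```

Prefer implicit dependencies via outputs where possible; depends_on on a whole module is coarse-grained and can delay planning of everything inside it.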
Using Data Sources Inside Modules
Modules can also consume data sources to retrieve information from the current cloud environment. For example, a module might need to find an existing AMI or subnet. Heres an example inside a module:
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.instance_type
  subnet_id     = var.subnet_id

  tags = {
    Name = var.instance_name
  }
}
This allows your module to be more dynamic and context-aware without hardcoding values. Data sources are evaluated during the planning phase, so they're safe to use in reusable modules.
Best Practices
Use Meaningful Module Names
Module names should be descriptive and follow a consistent naming convention. Avoid generic names like aws or infra. Instead, use names like:
- modules/vpc
- modules/rds-postgresql
- modules/eks-cluster
- modules/lambda-function
This makes it easy for other engineers to discover and understand the purpose of each module.
Document Your Modules
Every module should include a README.md file that explains:
- What the module does
- Required and optional inputs
- Outputs provided
- Example usage
- Version compatibility
- Known limitations
Good documentation reduces onboarding time and prevents misuse. Consider using tools like terraform-docs to auto-generate documentation from your variables.tf and outputs.tf files.
Pin Module Versions
As mentioned earlier, always specify a version for remote modules. Never use source = "github.com/..." without a version tag or branch. Unpinned modules lead to unpredictable deployments and are a major source of production incidents.
Separate Environments Using Workspaces or Separate Repositories
While Terraform workspaces allow you to manage multiple environments (dev, staging, prod) within a single configuration, they are not recommended for complex infrastructures. Instead, use separate directories or repositories for each environment, each calling the same modules with different variables.
Example structure:
infra/
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   └── terraform.tfvars
│   ├── staging/
│   │   ├── main.tf
│   │   └── terraform.tfvars
│   └── prod/
│       ├── main.tf
│       └── terraform.tfvars
├── modules/
│   ├── vpc/
│   ├── rds/
│   └── ecs/
└── variables.tf
This approach isolates state, reduces risk of cross-environment changes, and allows for different access controls and CI/CD pipelines per environment.
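In this layout, each environment's terraform.tfvars typically differs only in a few values (the variable names and values below are illustrative):

```hcl
# environments/dev/terraform.tfvars (illustrative values)
vpc_cidr      = "10.0.0.0/16"
vpc_name      = "my-app-dev"
instance_type = "t3.micro"

# environments/prod/terraform.tfvars would set the same variables differently:
# vpc_cidr      = "10.20.0.0/16"
# vpc_name      = "my-app-prod"
# instance_type = "m5.large"
```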
Use Input Validation and Default Values
Prevent invalid configurations by validating inputs. Use the validation block in your variables:
variable "instance_type" {
  description = "EC2 instance type"
  type        = string

  validation {
    condition = contains([
      "t3.micro", "t3.small", "t3.medium", "m5.large", "c5.xlarge"
    ], var.instance_type)
    error_message = "Invalid instance type. Allowed values: t3.micro, t3.small, t3.medium, m5.large, c5.xlarge."
  }
}
Provide sensible defaults where appropriate:
variable "enable_monitoring" {
  description = "Whether to enable detailed CloudWatch monitoring"
  type        = bool
  default     = true
}
This makes modules easier to use and reduces the chance of human error.
Avoid Hardcoding Provider Configurations
Modules should not define provider blocks unless absolutely necessary. Providers should be configured at the root level. This allows the calling configuration to control authentication, region, and other provider settings.
Bad (inside module):
provider "aws" {
  region = "us-west-2"
}
Good (in root):
provider "aws" {
  region = var.aws_region
}
Pass region and credentials through variables if needed.
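When the root configuration defines multiple provider configurations (aliases), you choose which one a module uses with the providers meta-argument. A sketch, assuming a second aliased us-west-2 configuration and the VPC module from earlier:

```hcl
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

module "vpc_west" {
  source = "./modules/vpc"

  # Map the module's default "aws" provider to the aliased configuration
  providers = {
    aws = aws.west
  }

  vpc_cidr       = "10.2.0.0/16"
  vpc_name       = "my-app-west-vpc"
  public_subnets = ["10.2.1.0/24"]
}
```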
Test Modules in Isolation
Use tools like terratest (Go-based) or pytest with terraform-exec to write automated tests for your modules. Test scenarios should include:
- Successful provisioning
- Invalid input rejection
- Output correctness
- Idempotency (running apply twice produces no changes)
Testing modules in isolation ensures they behave correctly before being consumed in production environments.
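Terraform also ships a native test framework (terraform test, available since Terraform 1.6) as a lighter-weight alternative to terratest. A minimal sketch against the VPC module above, using a plan-only assertion:

```hcl
# modules/vpc/tests/vpc.tftest.hcl (requires Terraform 1.6+)
run "creates_vpc_with_expected_cidr" {
  command = plan

  variables {
    vpc_cidr       = "10.0.0.0/16"
    vpc_name       = "test-vpc"
    public_subnets = ["10.0.1.0/24"]
  }

  assert {
    condition     = aws_vpc.main.cidr_block == "10.0.0.0/16"
    error_message = "VPC CIDR block did not match the input variable."
  }
}
```

With command = plan, no real infrastructure is created; switch to command = apply for full integration tests that provision and then destroy real resources.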
Follow Semantic Versioning
If you're publishing your own modules (especially internally), follow semantic versioning: MAJOR.MINOR.PATCH.
- MAJOR: Breaking changes (renamed inputs, removed resources)
- MINOR: New features (added outputs, new optional parameters)
- PATCH: Bug fixes, documentation updates
This helps consumers understand the risk of upgrading.
Tools and Resources
Terraform Registry
The Terraform Registry is the largest public collection of community and official modules. It includes verified modules from HashiCorp and top contributors for AWS, Azure, GCP, Kubernetes, and more. Always prefer modules from the registry over random GitHub repositories: they are tested, versioned, and documented.
terraform-docs
terraform-docs is a command-line tool that auto-generates documentation for Terraform modules from their variables and outputs. Install it via Homebrew:
brew install terraform-docs
Then run in your module directory:
terraform-docs markdown . > README.md
This generates a clean, structured README that reflects your current configuration.
Checkov and Terrascan
Security scanning tools like Checkov and Terrascan can scan your modules for misconfigurations, compliance violations, and security risks. Integrate them into your CI pipeline to catch issues before deployment.
Git Repositories and Private Registries
For enterprise teams, consider hosting modules in a private Git repository (e.g., GitHub Enterprise, GitLab) and using Terraforms private registry feature. Terraform Cloud and Terraform Enterprise offer private module registries with access controls, versioning, and audit trails.
Visual Studio Code Extensions
Use the official Terraform extension by HashiCorp for VS Code. It provides syntax highlighting, auto-completion, linting, and module navigation. Other useful extensions include Terraform Snippets and Diff for comparing state changes.
CI/CD Integration
Integrate Terraform modules into your CI/CD pipeline using tools like GitHub Actions, GitLab CI, or Jenkins. Key steps include:
- Run terraform fmt to enforce formatting
- Run terraform validate to check syntax
- Run terraform plan in a non-destructive mode
- Run security scans (Checkov, Terrascan)
- Require approvals before apply
This ensures code quality and reduces risk in production.
Open Source Modules to Study
Study well-maintained modules to learn best practices:
- terraform-aws-modules/vpc
- terraform-aws-modules/eks
- terraform-google-modules/kubernetes-engine
- aztfmod/caf (Azure CAF)
These modules demonstrate modular design, extensive documentation, testing, and versioning.
Real Examples
Example 1: Deploying a Secure Web Application
Let's build a real-world example: a secure web application on AWS using modules.
Module structure:
web-app/
├── environments/
│   └── prod/
│       ├── main.tf
│       ├── variables.tf
│       └── terraform.tfvars
├── modules/
│   ├── vpc/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── security-groups/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── alb/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── ec2-autoscale/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
└── providers.tf
modules/vpc/main.tf creates a VPC with public/private subnets and NAT gateways.
modules/security-groups/main.tf defines security groups for ALB (port 80/443), EC2 (port 22, 80), and RDS (port 5432).
modules/alb/main.tf creates an Application Load Balancer, target groups, and listeners.
modules/ec2-autoscale/main.tf creates an Auto Scaling Group with launch template, health checks, and scaling policies.
environments/prod/main.tf:
provider "aws" {
  region = "us-west-2"
}

module "vpc" {
  source = "../modules/vpc"

  name = "web-app-prod"
  cidr = "10.10.0.0/16"
}

module "security_groups" {
  source = "../modules/security-groups"
  vpc_id = module.vpc.vpc_id
}

module "alb" {
  source = "../modules/alb"

  vpc_id            = module.vpc.vpc_id
  subnet_ids        = module.vpc.public_subnet_ids
  security_group_id = module.security_groups.alb_sg_id
}

module "ec2_autoscale" {
  source = "../modules/ec2-autoscale"

  vpc_id            = module.vpc.vpc_id
  subnet_ids        = module.vpc.private_subnet_ids
  security_group_id = module.security_groups.ec2_sg_id
  target_group_arn  = module.alb.target_group_arn
  instance_type     = "t3.medium"
  min_size          = 2
  max_size          = 6
}
This structure allows you to deploy the same application stack to staging by changing only the terraform.tfvars file with different values for name, CIDR, instance type, and size.
Example 2: Multi-Cloud Kubernetes Cluster
Suppose you need to deploy a Kubernetes cluster on both AWS and Azure. Instead of writing two separate configurations, create a module that accepts a provider variable:
modules/k8s-cluster/main.tf:
variable "cloud_provider" {
  type    = string
  default = "aws"
}

# A module's source must be a literal string -- it cannot reference variables
# or locals -- so select the implementation conditionally with count instead.
module "k8s_aws" {
  source = "./aws"
  count  = var.cloud_provider == "aws" ? 1 : 0

  # Pass common inputs
  cluster_name = var.cluster_name
  node_count   = var.node_count
  node_size    = var.node_size
}

module "k8s_azure" {
  source = "./azurerm"
  count  = var.cloud_provider == "aws" ? 0 : 1

  cluster_name = var.cluster_name
  node_count   = var.node_count
  node_size    = var.node_size
}
Then create subdirectories modules/k8s-cluster/aws and modules/k8s-cluster/azurerm with provider-specific configurations. This pattern enables true multi-cloud reusability.
Example 3: Reusable Database Module
Create a module for PostgreSQL RDS that supports both dev (single-node) and prod (multi-AZ) configurations:
modules/rds-postgresql/variables.tf:
variable "environment" {
  type    = string
  default = "dev"

  validation {
    condition     = contains(["dev", "prod"], var.environment)
    error_message = "Environment must be 'dev' or 'prod'."
  }
}

variable "instance_class" {
  type    = string
  default = "db.t3.micro"
}

variable "allocated_storage" {
  type    = number
  default = 20
}
modules/rds-postgresql/main.tf:
resource "aws_db_instance" "primary" {
  allocated_storage      = var.allocated_storage
  engine                 = "postgres"
  engine_version         = "15.3"
  instance_class         = var.instance_class
  db_name                = "myapp"
  username               = "admin"
  password               = var.db_password
  skip_final_snapshot    = var.environment == "dev"
  publicly_accessible    = var.environment == "dev"
  multi_az               = var.environment == "prod"
  vpc_security_group_ids = [var.security_group_id]
  db_subnet_group_name   = var.db_subnet_group_name
}
Now, in your root configuration:
module "dev_db" {
  source = "../modules/rds-postgresql"

  environment          = "dev"
  security_group_id    = module.vpc.db_sg_id
  db_subnet_group_name = module.vpc.db_subnet_group_name
}

module "prod_db" {
  source = "../modules/rds-postgresql"

  environment          = "prod"
  instance_class       = "db.m6g.large"
  allocated_storage    = 100
  security_group_id    = module.vpc.db_sg_id
  db_subnet_group_name = module.vpc.db_subnet_group_name
}
One module, two very different deployments: clean, scalable, and maintainable.
FAQs
What is the difference between a Terraform module and a provider?
A provider is a plugin that Terraform uses to interact with a cloud platform (e.g., AWS, Azure, GCP). It handles authentication, API calls, and resource types. A module is a collection of Terraform configurations that define infrastructure components (e.g., VPC, EC2, RDS). You use providers to connect to clouds; you use modules to build infrastructure on top of them.
Can I use modules from private GitHub repositories?
Yes. Use the Git URL format: source = "github.com/your-org/your-module?ref=v1.2.3". Terraform supports SSH and HTTPS authentication. For HTTPS, ensure your CI/CD system has a personal access token with read access to the repository.
How do I update a module to a new version?
Update the version constraint in your module block, then run terraform init. Terraform will download the new version. Always run terraform plan first to review changes before applying. If breaking changes are introduced, update your input variables accordingly.
Do modules support state management?
Modules do not manage state independently. All state is managed by the root module. When you call a module, its resources are tracked in the same state file as the root configuration. This ensures consistency and prevents conflicts.
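One consequence: if you refactor existing root-level resources into a module, Terraform would normally plan to destroy and recreate them under their new addresses. A moved block (Terraform 1.1+) tells Terraform the resource only changed address. A sketch, assuming the VPC resource from earlier was moved into modules/vpc:

```hcl
# In the root configuration, after the refactoring
moved {
  from = aws_vpc.main
  to   = module.vpc.aws_vpc.main
}
```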
Can I use modules with Terraform Cloud or Enterprise?
Yes. Terraform Cloud and Enterprise offer private module registries where you can publish, version, and control access to internal modules. You can also use remote state and run Terraform in a managed environment with policy enforcement and audit logs.
What happens if a module is deleted from the registry?
If you're using a versioned module (e.g., version = "1.2.0"), Terraform will continue to use the cached version. The module is downloaded once and stored locally. However, if you reinitialize without a lock file or clear your cache, you may lose access. Always pin versions and consider hosting critical modules internally.
How do I test if my module is working correctly?
Use terraform plan to validate syntax and resource creation. Use terratest to write Go-based tests that spin up real infrastructure and verify outputs. For example, test that an EC2 instance is running or that an S3 bucket has the correct policy.
Should I put all my infrastructure in one module?
No. Large monolithic modules are hard to maintain, test, and reuse. Break your infrastructure into logical, single-responsibility modules: one for networking, one for compute, one for databases, etc. This promotes modularity and reduces coupling.
Conclusion
Terraform modules are not just a convenience; they are a foundational element of scalable, maintainable, and enterprise-grade infrastructure as code. By encapsulating reusable patterns, enforcing consistency, and reducing duplication, modules empower teams to deploy infrastructure faster, with fewer errors and greater confidence. This guide has walked you through creating, consuming, versioning, and testing modules, as well as applying industry best practices and real-world examples.
As your infrastructure grows, so should your use of modules. Start small: refactor a repetitive VPC or EC2 configuration into a module today. Then expand to databases, load balancers, and Kubernetes clusters. Over time, you'll build a library of trusted, tested components that become the backbone of your entire infrastructure.
Remember: the goal of Terraform is not just to provision resources, but to make infrastructure predictable, repeatable, and maintainable. Modules are the key to achieving that goal at scale. Embrace them, document them, test them, and share them, and your team will thank you.