How to Automate AWS with Terraform


Nov 6, 2025 - 10:19


Modern cloud infrastructure demands speed, consistency, and repeatability. Manual configuration of Amazon Web Services (AWS) resources is error-prone, time-consuming, and unsustainable at scale. That's where Infrastructure as Code (IaC) comes in, and Terraform stands at the forefront of this revolution. Automating AWS with Terraform enables teams to define, provision, and manage cloud resources using declarative configuration files, ensuring that environments are identical across development, testing, and production. This tutorial provides a comprehensive, step-by-step guide to mastering AWS automation with Terraform, covering everything from initial setup to advanced best practices, real-world examples, and essential tools. Whether you're a DevOps engineer, cloud architect, or developer looking to streamline your AWS workflows, this guide will equip you with the knowledge to implement scalable, secure, and maintainable infrastructure automation.

Step-by-Step Guide

Prerequisites and Setup

Before automating AWS with Terraform, ensure you have the following prerequisites in place:

  • An AWS account with programmatic access (Access Key ID and Secret Access Key)
  • Installed AWS CLI configured with credentials
  • Installed Terraform (version 1.5 or higher recommended)
  • A code editor (e.g., VS Code, Sublime Text, or JetBrains IDEs)
  • Basic understanding of JSON or HCL (HashiCorp Configuration Language)

To install Terraform, visit the official Terraform downloads page and follow the instructions for your operating system. On macOS, you can use Homebrew:

brew install terraform

On Linux, download the binary and move it to your PATH:

wget https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip

unzip terraform_1.5.7_linux_amd64.zip

sudo mv terraform /usr/local/bin/

Verify the installation:

terraform -version

Next, configure AWS credentials. You can do this in two ways:

  1. Using AWS CLI: Run aws configure and enter your Access Key ID, Secret Access Key, default region (e.g., us-east-1), and output format (json).
  2. Using environment variables: Export the following in your shell profile (.bashrc, .zshrc, etc.):

export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_DEFAULT_REGION=us-east-1

After setup, you're ready to write your first Terraform configuration.

Creating Your First Terraform Configuration

Create a new directory for your project:

mkdir aws-terraform-demo

cd aws-terraform-demo

Create a file named main.tf and define your AWS provider:

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "example_bucket" {
  bucket = "my-unique-s3-bucket-name-12345"
}

resource "aws_s3_bucket_public_access_block" "example_block" {
  bucket = aws_s3_bucket.example_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

This configuration does three things:

  • Declares the AWS provider with region us-east-1
  • Creates an S3 bucket with a globally unique name
  • Applies public access blocking to comply with security best practices

Save the file and initialize Terraform:

terraform init

This command downloads the AWS provider plugin and sets up the backend (local by default). Next, review the execution plan:

terraform plan

You'll see output showing that Terraform will create one S3 bucket and one access block. If the plan looks correct, apply it:

terraform apply

Terraform will prompt for confirmation. Type yes and press Enter. Within seconds, your S3 bucket is created in AWS. You can verify this by logging into the AWS Console, navigating to S3, and locating your bucket.
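terraform init also resolves which provider version to download. To keep runs reproducible across machines, it is common to pin the provider in a terraform block; a minimal sketch (the version constraint is an assumption, pick the major version you actually test against):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # assumed constraint; adjust to your tested version
    }
  }
}
```

With this in place, terraform init records the resolved version in a .terraform.lock.hcl file that you can commit alongside your configuration.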

Managing Multiple Environments with Workspaces

As your infrastructure grows, managing separate environments (development, staging, and production) becomes critical. Terraform workspaces allow you to maintain multiple state files within the same configuration.

Create workspaces:

terraform workspace new dev

terraform workspace new staging

terraform workspace new prod

List available workspaces:

terraform workspace list

Switch to the dev workspace:

terraform workspace select dev

Now modify your main.tf to use dynamic bucket names based on the workspace:

resource "aws_s3_bucket" "example_bucket" {
  bucket = "my-app-${terraform.workspace}-bucket"
}

When you run terraform apply in the dev workspace, the bucket name becomes my-app-dev-bucket. In production, it becomes my-app-prod-bucket. This eliminates naming conflicts and enables isolated, environment-specific infrastructure.
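If environments need to differ in more than names, a locals map keyed by terraform.workspace can vary settings per environment as well. A sketch (the map values are illustrative, and it assumes workspaces named like the keys):

```hcl
locals {
  instance_types = {
    dev     = "t3.micro"
    staging = "t3.small"
    prod    = "t3.large"
  }

  # Selected per workspace; reference as local.instance_type elsewhere
  instance_type = local.instance_types[terraform.workspace]
}
```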

Using Modules for Reusability

Repetition in infrastructure code leads to maintenance nightmares. Terraform modules allow you to package and reuse configurations. Create a module for a standard VPC:

Inside your project directory, create a folder named modules/vpc. Inside it, create main.tf:

resource "aws_vpc" "main" {
  cidr_block = var.cidr_block

  tags = {
    Name = "${var.name}-vpc"
  }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "${var.name}-igw"
  }
}

resource "aws_subnet" "public" {
  count = length(var.public_subnets)

  cidr_block        = var.public_subnets[count.index]
  availability_zone = data.aws_availability_zones.available.names[count.index]
  vpc_id            = aws_vpc.main.id

  tags = {
    Name = "${var.name}-public-subnet-${count.index + 1}"
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Name = "${var.name}-public-rt"
  }
}

resource "aws_route_table_association" "public" {
  count = length(aws_subnet.public)

  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

data "aws_availability_zones" "available" {}

variable "name" {
  description = "Name prefix for resources"
  type        = string
}

variable "cidr_block" {
  description = "CIDR block for VPC"
  type        = string
}

variable "public_subnets" {
  description = "List of CIDR blocks for public subnets"
  type        = list(string)
}
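Later sections reference module.vpc.vpc_id and module.vpc.public_subnets, so the module must export those values. A minimal outputs sketch for modules/vpc:

```hcl
output "vpc_id" {
  value = aws_vpc.main.id
}

output "public_subnets" {
  # Subnet IDs, in the same order as var.public_subnets
  value = aws_subnet.public[*].id
}
```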

Now, in your root main.tf, call the module:

module "vpc" {
  source = "./modules/vpc"

  name           = "myapp"
  cidr_block     = "10.0.0.0/16"
  public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
}

Run terraform plan and terraform apply again. Terraform now provisions a full VPC with public subnets and an internet gateway using a reusable module. This approach allows you to deploy identical VPCs across multiple projects or regions with minimal duplication.
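To reuse the same module in another region, you can pass an aliased provider to a second module instance. A sketch (the alias, region, and CIDR values are placeholders):

```hcl
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

module "vpc_west" {
  source = "./modules/vpc"

  # Route this module instance to the aliased provider
  providers = {
    aws = aws.west
  }

  name           = "myapp-west"
  cidr_block     = "10.1.0.0/16"
  public_subnets = ["10.1.1.0/24", "10.1.2.0/24"]
}
```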

Adding Security Groups and EC2 Instances

Now let's extend our infrastructure to include a web server. Add the following to main.tf:

resource "aws_security_group" "web_server" {
  name        = "web-server-sg"
  description = "Allow HTTP and SSH access"
  vpc_id      = module.vpc.vpc_id

  ingress {
    description = "SSH from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # Amazon Linux 2
  instance_type = "t2.micro"

  # Instances launched into a subnet must reference security group IDs
  vpc_security_group_ids = [aws_security_group.web_server.id]
  subnet_id              = module.vpc.public_subnets[0]

  tags = {
    Name = "web-server"
  }

  user_data = <<-EOF
    #!/bin/bash
    yum update -y
    yum install -y httpd
    systemctl start httpd
    systemctl enable httpd
    echo "Hello from Terraform!" > /var/www/html/index.html
  EOF
}

This configuration:

  • Creates a security group allowing SSH (port 22) and HTTP (port 80)
  • Launches an EC2 t2.micro instance using Amazon Linux 2
  • Uses the first public subnet from the VPC module
  • Deploys a simple web page via user data script

After applying, you can access the web server by copying the public IP from the AWS Console or using:

terraform output -raw web_server_public_ip

Then paste the IP into your browser. You should see "Hello from Terraform!"
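Hardcoded AMI IDs are region-specific and go stale over time. One common alternative (a sketch, not required by this tutorial) is to look up the latest Amazon Linux 2 image with a data source:

```hcl
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Then, in aws_instance.web:
#   ami = data.aws_ami.amazon_linux_2.id
```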

Output Variables and State Management

To make your infrastructure outputs accessible, define output variables in outputs.tf:

output "vpc_id" {
  value = module.vpc.vpc_id
}

output "public_subnets" {
  value = module.vpc.public_subnets
}

output "web_server_public_ip" {
  value = aws_instance.web.public_ip
}

output "web_server_url" {
  value = "http://${aws_instance.web.public_ip}"
}

After applying, run:

terraform output

This displays all outputs, including the URL to your web server. Terraform automatically stores state in a local terraform.tfstate file. For team environments, use a remote backend like S3:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
  }
}

Enable state locking with DynamoDB to prevent concurrent modifications:

terraform init -backend-config="dynamodb_table=terraform-locks"
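The state bucket and lock table must exist before terraform init can use them, so they are usually bootstrapped separately. A sketch (resource names match the backend block above; billing mode and versioning are assumptions):

```hcl
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-terraform-state-bucket"
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id

  versioning_configuration {
    status = "Enabled" # keep old state versions for recovery
  }
}

resource "aws_dynamodb_table" "tf_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # the key name the S3 backend expects

  attribute {
    name = "LockID"
    type = "S"
  }
}
```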

Best Practices

Use Version Control for All Infrastructure Code

Terraform configurations should be treated like application code. Store all .tf files in a Git repository. Use branches for feature development and pull requests for code reviews. This ensures auditability, collaboration, and rollback capability. Never commit sensitive data like API keys or secrets. Use environment variables or AWS Secrets Manager for credentials.
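To keep state files and local artifacts out of Git, a typical .gitignore for a Terraform project might look like this (a sketch; keep *.tfvars ignored only if yours contain secrets):

```text
.terraform/
*.tfstate
*.tfstate.backup
crash.log
*.tfvars
```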

Separate Environments with Workspaces or Repositories

While workspaces are convenient for small teams, large organizations benefit from separate repositories per environment (e.g., infra-dev, infra-prod). This enforces stricter access controls and reduces the risk of accidental production changes. Use tools like Terraform Cloud or Atlantis to automate deployments based on pull requests.

Implement Module Versioning

Always pin module versions in your root configuration:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.14.0"

  # ...
}

Using versioned modules from the Terraform Registry ensures stability and allows you to upgrade intentionally. Avoid using source = "./modules/vpc" in production unless you're certain of the module's immutability.

Apply the Principle of Least Privilege

Never use root AWS credentials with Terraform. Create an IAM user with minimal permissions. Use AWS IAM policies to restrict Terraform to only the services and actions it needs. For example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:Create*",
        "ec2:Delete*",
        "s3:CreateBucket",
        "s3:PutBucketPolicy",
        "s3:DeleteBucket"
      ],
      "Resource": "*"
    }
  ]
}

Use AWS IAM Roles for Service Accounts (IRSA) in Kubernetes environments or assume roles in CI/CD pipelines for temporary, secure access.
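In a pipeline, assuming a role yields temporary credentials instead of long-lived access keys. A provider sketch (the account ID and role name are placeholders):

```hcl
provider "aws" {
  region = "us-east-1"

  assume_role {
    # Placeholder ARN; point at the IAM role your pipeline may assume
    role_arn     = "arn:aws:iam::123456789012:role/terraform-deployer"
    session_name = "terraform-ci"
  }
}
```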

Use Sentinel or Open Policy Agent (OPA) for Policy Enforcement

Terraform Cloud and Enterprise support Sentinel policies to enforce compliance rules. For example, you can block any Terraform plan that attempts to create an S3 bucket without public access blocking. Similarly, OPA can be integrated into CI/CD pipelines to validate configurations before apply.

Regularly Run Terraform Plan and Validate

Always run terraform plan before terraform apply. Review the execution plan carefully. Look for unexpected resource creation, modification, or destruction. Use tools like tfsec or checkov to scan for security misconfigurations in your code before applying.
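Terraform itself can also reject bad inputs at plan time through variable validation, complementing external scanners. A sketch (the variable name and allowed values are illustrative):

```hcl
variable "instance_type" {
  description = "EC2 instance type for the web tier"
  type        = string
  default     = "t3.micro"

  validation {
    condition     = contains(["t3.micro", "t3.small"], var.instance_type)
    error_message = "Only t3.micro and t3.small are allowed in this project."
  }
}
```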

Use Remote State with Locking

Local state files are a single point of failure. Use S3 + DynamoDB for remote state with locking. This prevents multiple users from applying changes simultaneously, which could corrupt state or cause inconsistent infrastructure.

Document Your Infrastructure

Include README files with each module or project. Document:

  • What resources are created
  • Required inputs and their expected values
  • Outputs and how to use them
  • Dependencies and prerequisites
  • Known limitations

This documentation becomes critical for onboarding new engineers and maintaining infrastructure over time.

Tools and Resources

Official Terraform Tools

  • Terraform CLI: The core tool for writing, planning, and applying infrastructure. Available at developer.hashicorp.com/terraform
  • Terraform Registry: A public repository of verified modules. Search for AWS modules at registry.terraform.io/namespaces/terraform-aws-modules
  • Terraform Cloud: A hosted service for collaboration, state management, policy enforcement, and CI/CD integration. Offers a free tier for small teams.
  • terraform validate: A built-in command that checks syntax and configuration without touching real infrastructure.

Security and Compliance Tools

  • tfsec: A static analysis tool that scans Terraform code for security issues. Install via Go: go install github.com/aquasecurity/tfsec@latest
  • Checkov: An open-source tool by Bridgecrew that scans for misconfigurations across multiple IaC tools, including Terraform. Supports custom policies.
  • Terrascan: A policy-as-code scanner that supports over 300 rules for AWS, Azure, and GCP.

CI/CD Integration Tools

  • GitHub Actions: Automate Terraform plans and applies on pull requests using community actions like hashicorp/setup-terraform
  • GitLab CI/CD: Use Terraform in your .gitlab-ci.yml file with Docker images containing Terraform and the AWS CLI
  • Atlantis: An open-source tool that integrates with GitHub, GitLab, and Bitbucket to automate Terraform workflows via pull request comments
  • Spacelift: A modern IaC orchestration platform with built-in drift detection, policy controls, and stack dependencies

Visual Tools

  • terraform graph: Generate visual diagrams of your infrastructure: terraform graph | dot -Tpng > graph.png
  • Diagrams.net: Manually design infrastructure diagrams to document your Terraform-managed architecture
  • Cloudcraft: A commercial tool that auto-generates AWS architecture diagrams from your deployed resources

Learning Resources

  • HashiCorp Learn: Free interactive tutorials on Terraform and AWS: learn.hashicorp.com/terraform
  • Udemy's Terraform for AWS course: Comprehensive video course by Stephane Maarek
  • GitHub terraform-aws-modules: The most popular collection of production-ready AWS modules: github.com/terraform-aws-modules
  • Reddit r/Terraform: Active community for troubleshooting and sharing best practices

Real Examples

Example 1: Automated WordPress Site on AWS

Here's a complete example of deploying a WordPress site using Terraform:

provider "aws" {
  region = "us-east-1"
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "wordpress-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.10.0/24", "10.0.11.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true
}

resource "aws_security_group" "wordpress" {
  name        = "wordpress-sg"
  description = "Allow HTTP, HTTPS, and MySQL"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 3306
    to_port   = 3306
    protocol  = "tcp"
    # The VPC module's private_subnets output holds subnet IDs;
    # cidr_blocks needs the CIDR output instead
    cidr_blocks = [module.vpc.private_subnets_cidr_blocks[0]]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_db_instance" "wordpress_db" {
  allocated_storage = 20
  engine            = "mysql"
  engine_version    = "8.0"
  instance_class    = "db.t3.micro"
  db_name           = "wordpress" # db_name replaced the deprecated name argument
  username          = "admin"
  password          = "MySecurePass123!" # Demo only; use a secret store in practice

  db_subnet_group_name   = aws_db_subnet_group.wordpress.name
  vpc_security_group_ids = [aws_security_group.wordpress.id]
  skip_final_snapshot    = true
}

resource "aws_db_subnet_group" "wordpress" {
  name       = "wordpress-subnet-group"
  subnet_ids = module.vpc.private_subnets

  tags = {
    Name = "wordpress-db-subnet-group"
  }
}

resource "aws_instance" "wordpress" {
  ami           = "ami-0e59362466924221f" # Amazon Linux 2
  instance_type = "t3.micro"

  subnet_id              = module.vpc.public_subnets[0]
  vpc_security_group_ids = [aws_security_group.wordpress.id]
  key_name               = "my-key-pair" # Ensure this key pair exists in AWS

  user_data = <<-EOF
    #!/bin/bash
    yum update -y
    yum install -y httpd php php-mysqlnd
    systemctl start httpd
    systemctl enable httpd
    cd /var/www/html
    wget https://wordpress.org/latest.tar.gz
    tar -xzf latest.tar.gz
    mv wordpress/* .
    rm -rf wordpress latest.tar.gz
    chown -R apache:apache /var/www/html
  EOF

  tags = {
    Name = "wordpress-server"
  }
}

output "wordpress_url" {
  value = "http://${aws_instance.wordpress.public_ip}"
}

This example creates a secure, multi-tier architecture:

  • VPC with public and private subnets
  • MySQL database in private subnet
  • WordPress server in public subnet
  • Only HTTP/HTTPS exposed to the internet
  • Database accessible only from the web server

Example 2: CI/CD Pipeline with GitHub Actions

Automate Terraform deployments using GitHub Actions. Create .github/workflows/terraform.yml:

name: Terraform Plan and Apply

on:
  pull_request:
    branches: [ main ]
  push:
    branches: [ main ]

jobs:
  terraform:
    name: Terraform
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2

      - name: AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan
        id: plan
        run: terraform plan

      - name: Comment on PR
        if: github.event_name == 'pull_request'
        uses: thollander/actions-comment-pull-request@v1
        with:
          message: |
            Terraform Plan:
            ${{ steps.plan.outputs.stdout }}
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Terraform Apply
        if: github.event_name == 'push' && github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve

This workflow:

  • Runs on pull requests and pushes to the main branch
  • Runs terraform plan and comments the result on the PR
  • Only runs terraform apply on direct pushes to main
  • Uses secrets for AWS credentials

Developers can now review infrastructure changes before merging, ensuring safe, collaborative deployments.

Example 3: Auto-Scaling Web Application

Deploy a scalable web application with an Application Load Balancer (ALB) and Auto Scaling Group:

resource "aws_alb" "web" {
  name               = "web-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = module.vpc.public_subnets

  tags = {
    Name = "web-alb"
  }
}

resource "aws_alb_target_group" "web" {
  name     = "web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = module.vpc.vpc_id

  health_check {
    path                = "/" # user_data serves only index.html, so check the root path
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}

resource "aws_alb_listener" "web" {
  load_balancer_arn = aws_alb.web.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.web.arn
  }
}

resource "aws_launch_configuration" "web" {
  image_id        = "ami-0c55b159cbfafe1f0"
  instance_type   = "t3.micro"
  security_groups = [aws_security_group.web.id]

  user_data = <<-EOF
    #!/bin/bash
    yum update -y
    yum install -y httpd
    systemctl start httpd
    systemctl enable httpd
    echo "Auto-Scaled Web Server" > /var/www/html/index.html
  EOF

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  name                 = "web-asg"
  launch_configuration = aws_launch_configuration.web.name
  min_size             = 2
  max_size             = 5
  desired_capacity     = 2
  vpc_zone_identifier  = module.vpc.public_subnets
  target_group_arns    = [aws_alb_target_group.web.arn] # register instances with the ALB
  health_check_type    = "ELB"

  tag {
    key                 = "Name"
    value               = "web-server"
    propagate_at_launch = true
  }
}

This configuration ensures high availability: if one instance fails, the Auto Scaling Group replaces it. The ALB distributes traffic evenly across healthy instances. This is a production-grade pattern for web applications.

FAQs

What is Terraform and how does it differ from AWS CloudFormation?

Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp that supports multiple cloud providers, including AWS, Azure, GCP, and others. CloudFormation is AWS's proprietary IaC service. Terraform uses HCL (HashiCorp Configuration Language), which many find more readable and flexible than CloudFormation's JSON or YAML. Terraform also supports remote state management, modules, and a rich ecosystem of providers and tools. CloudFormation is tightly integrated with AWS services but lacks cross-cloud support.

Can Terraform manage existing AWS resources?

Yes, Terraform can import existing resources using the terraform import command. For example: terraform import aws_s3_bucket.example my-existing-bucket-name. After importing, Terraform will manage the resource as if it were created by Terraform. Always review the generated configuration and update your .tf files accordingly.
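On Terraform 1.5 and newer (the version this guide recommends), imports can also be declared in configuration rather than run imperatively. A sketch using the same bucket as the CLI example:

```hcl
import {
  to = aws_s3_bucket.example
  id = "my-existing-bucket-name"
}
```

Running terraform plan -generate-config-out=generated.tf then drafts matching resource blocks for you to review and merge into your .tf files.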

How do I handle secrets in Terraform?

Never hardcode secrets like passwords or API keys in Terraform files. Use environment variables, AWS Secrets Manager, or AWS SSM Parameter Store. In Terraform, reference them using data sources:

data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/database/password"
}

resource "aws_db_instance" "example" {
  # ...
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}

What happens if Terraform fails during apply?

Terraform is designed to be idempotent. If an apply fails partway, the state file records the resources that were successfully created. Run terraform plan to see what remains to be applied. Fix the error (e.g., a permission issue or resource limit), then run terraform apply again. Terraform will attempt to complete only the remaining changes.

Is Terraform safe for production use?

Yes, when used with best practices: version control, remote state with locking, policy enforcement, CI/CD reviews, and least-privilege access. Many Fortune 500 companies use Terraform to manage their entire AWS infrastructure. Always test changes in non-production environments first.

How do I update Terraform versions?

Use the terraform version command to check your current version. To upgrade, download the new version from the official site and replace the binary. Always test new versions in a staging environment first. Newer Terraform releases can generally read state written by older ones, but always back up your state before upgrading.
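You can also declare which CLI versions a configuration supports, so teammates get a clear error instead of a subtle mismatch. A sketch (the range is an assumption; widen or narrow it as you test):

```hcl
terraform {
  required_version = ">= 1.5.0, < 2.0.0"
}
```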

Can I use Terraform with Kubernetes on AWS?

Absolutely. Use the kubernetes provider to manage Kubernetes resources (deployments, services, config maps). Combine it with the aws provider to create EKS clusters:

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "my-eks-cluster"
  cluster_version = "1.24"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  node_groups = {
    ng1 = {
      desired_capacity = 2
      max_capacity     = 5
      min_capacity     = 2
      instance_type    = "t3.small"
    }
  }
}

Conclusion

Automating AWS with Terraform transforms infrastructure management from a manual, reactive process into a scalable, repeatable, and secure engineering discipline. By adopting Terraform, teams gain the ability to version control their cloud environments, collaborate effectively, enforce compliance, and deploy infrastructure with confidence. From simple S3 buckets to complex multi-region, auto-scaling architectures, Terraform provides the flexibility and power needed to meet modern cloud demands.

This guide has walked you through the entire lifecycle: from initial setup and writing your first configuration, to building reusable modules, enforcing security best practices, integrating with CI/CD, and deploying real-world applications. The examples provided serve as templates you can adapt to your own use cases.

The future of cloud infrastructure is code-driven. As AWS continues to evolve, so too must our methods of managing it. Terraform is not just a tool; it's a mindset. Embrace Infrastructure as Code, automate relentlessly, and build infrastructure that scales as fast as your business.

Start small. Test often. Document everything. And never stop learning. The next great cloud architecture begins with a single .tf file.