How to Write Terraform Script


Nov 6, 2025 - 10:19

Terraform is an open-source infrastructure as code (IaC) tool developed by HashiCorp that enables users to define, provision, and manage cloud and on-premises infrastructure using declarative configuration files. Unlike traditional manual or script-based approaches to infrastructure management, Terraform allows teams to version-control, reuse, and automate infrastructure deployments across multiple platforms, including AWS, Azure, Google Cloud, DigitalOcean, and more. Writing a Terraform script is not merely about typing configuration syntax; it's about designing scalable, secure, and repeatable systems that align with modern DevOps practices.

The importance of mastering Terraform scripting cannot be overstated in today's cloud-native environment. Organizations that adopt Terraform reduce configuration drift, accelerate deployment cycles, minimize human error, and improve auditability. Whether you're deploying a simple web server or orchestrating a global microservices architecture, Terraform scripts serve as the single source of truth for your infrastructure state. This guide will walk you through every essential step to write effective, maintainable, and production-ready Terraform scripts, from basic syntax to advanced patterns and real-world examples.

Step-by-Step Guide

Step 1: Understand Terraform's Core Concepts

Before writing your first Terraform script, it's critical to grasp the foundational elements that power Terraform's functionality:

  • Providers: These are plugins that allow Terraform to interact with cloud platforms or APIs. For example, the aws provider enables management of AWS resources.
  • Resources: These represent infrastructure components such as virtual machines, storage buckets, networks, or firewalls. Each resource is defined by a type and a set of arguments.
  • Variables: These allow you to parameterize your configuration, making it reusable across environments (e.g., dev, staging, prod).
  • Outputs: These expose values from your infrastructure after it's created, such as public IP addresses or endpoint URLs.
  • State: Terraform maintains a state file (typically terraform.tfstate) that tracks the real-world resources it manages. This file is essential for synchronizing configuration with actual infrastructure.
  • Modules: These are reusable collections of Terraform configurations that encapsulate complex infrastructure patterns, promoting code organization and reuse.

Understanding these concepts ensures you write scripts that are not only syntactically correct but architecturally sound.
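These pieces fit together even in the smallest configuration. The sketch below is illustrative only (the AMI ID is a placeholder, not a real image) and shows a provider, a variable, a resource, and an output living side by side in one file:

```hcl
# Provider: connects Terraform to the AWS API
provider "aws" {
  region = "us-east-1"
}

# Variable: parameterizes the instance size
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

# Resource: the infrastructure component itself
resource "aws_instance" "example" {
  ami           = "ami-00000000000000000" # placeholder AMI ID
  instance_type = var.instance_type
}

# Output: exposes a value once the resource exists
output "public_ip" {
  value = aws_instance.example.public_ip
}
```

Terraform records the created instance in its state file, so subsequent runs can diff this configuration against reality.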

Step 2: Install Terraform

To begin writing Terraform scripts, you must have Terraform installed on your local machine or CI/CD environment. Terraform supports Windows, macOS, and Linux.

Visit the official Terraform downloads page at https://developer.hashicorp.com/terraform/downloads and select the appropriate package for your OS. Alternatively, use a package manager:

On macOS with Homebrew:

brew install terraform

On Ubuntu/Debian:

sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform

Verify the installation:

terraform -version

You should see output similar to:

Terraform v1.8.5
on linux_amd64

Step 3: Set Up Your Working Directory

Create a dedicated directory for your Terraform project:

mkdir my-terraform-project
cd my-terraform-project

Inside this directory, create the following files:

  • main.tf: Primary configuration file where most resources and providers are defined.
  • variables.tf: Declares input variables used across the configuration.
  • outputs.tf: Defines values to be displayed after execution.
  • terraform.tfvars: (Optional) Provides values for variables without passing them on the command line.
  • provider.tf: (Optional) Separates provider configuration for clarity.

This structure enhances readability and maintainability, especially as your project scales.

Step 4: Configure a Provider

Every Terraform script begins with a provider declaration. Providers authenticate Terraform with the target platform and define the API endpoints it will use.

For example, to configure AWS:

provider "aws" {
  region     = "us-east-1"
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}

Alternatively, use AWS credentials via environment variables or the AWS CLI default profile to avoid hardcoding sensitive data:

provider "aws" {
  region = "us-east-1"
}

Then set environment variables:

export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_DEFAULT_REGION=us-east-1

For Azure:

provider "azurerm" {
  features {}

  subscription_id = var.azure_subscription_id
  tenant_id       = var.azure_tenant_id
  client_id       = var.azure_client_id
  client_secret   = var.azure_client_secret
}

For Google Cloud:

provider "google" {
  project     = var.gcp_project_id
  region      = var.gcp_region
  credentials = file(var.gcp_credentials_path)
}

Always avoid hardcoding credentials in your source files. Use environment variables, secrets management tools, or IAM roles instead.
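If you already use the AWS CLI, another option is to point the provider at a named profile from your shared credentials file; the profile name below is just an example:

```hcl
provider "aws" {
  region  = "us-east-1"
  profile = "dev-profile" # example profile defined in ~/.aws/credentials
}
```

This keeps credentials out of both your source files and your shell history.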

Step 5: Define Resources

Resources are the building blocks of your infrastructure. Each resource block defines a specific component and its configuration.

Example: Deploying an EC2 instance on AWS:

resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name = "WebServer-Dev"
  }
}

Here, aws_instance is the resource type, and web_server is the local name you assign. The ami (Amazon Machine Image) and instance_type are required arguments. Tags help with resource identification and cost allocation.

Another example: Creating an S3 bucket:

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-unique-bucket-name-12345"

  tags = {
    Environment = "dev"
    Owner       = "dev-team"
  }
}

Each resource type has its own set of required and optional arguments. Always consult the official provider documentation for the latest schema.
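Resources can also reference each other's attributes, which is how Terraform infers the order in which to create them. A small hedged sketch, attaching a hypothetical Elastic IP to the instance defined above:

```hcl
# Terraform creates aws_instance.web_server first, because this
# resource references its id.
resource "aws_eip" "web_ip" {
  instance = aws_instance.web_server.id
}
```

No explicit depends_on is needed here; the attribute reference is the dependency.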

Step 6: Use Variables for Reusability

Hardcoding values like instance types, regions, or names makes your scripts inflexible. Variables allow you to parameterize configurations and reuse them across environments.

In variables.tf:

variable "instance_type" {
  description = "The EC2 instance type to launch"
  type        = string
  default     = "t2.micro"
}

variable "region" {
  description = "AWS region to deploy resources"
  type        = string
  default     = "us-east-1"
}

variable "project_name" {
  description = "Name prefix for all resources"
  type        = string
  default     = "myapp"
}

In main.tf:

resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = var.instance_type

  tags = {
    Name = "${var.project_name}-web"
  }
}

To override defaults, create a terraform.tfvars file:

instance_type = "t3.medium"
region        = "us-west-2"
project_name  = "myapp-prod"

Or pass values at runtime:

terraform apply -var="instance_type=t3.large" -var="region=eu-central-1"
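For environment-specific sets of values, you can also keep separate .tfvars files and select one explicitly; the filename prod.tfvars here is an example:

```shell
terraform apply -var-file="prod.tfvars"
```

Files named terraform.tfvars or ending in .auto.tfvars are loaded automatically, so -var-file is only needed for files outside that convention.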

Step 7: Define Outputs

Outputs expose important values from your infrastructure after it's created. This is especially useful for retrieving dynamically generated values like public IPs or endpoint URLs.

In outputs.tf:

output "instance_public_ip" {
  description = "Public IP address of the EC2 instance"
  value       = aws_instance.web_server.public_ip
}

output "s3_bucket_name" {
  description = "Name of the created S3 bucket"
  value       = aws_s3_bucket.my_bucket.bucket
}

After running terraform apply, Terraform will display these values in the terminal output.
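You can also read outputs later without re-applying, which is handy in scripts. `terraform output` lists everything, and `-raw` prints a single value without quotes:

```shell
terraform output                          # list all outputs
terraform output -raw instance_public_ip  # one value, script-friendly
```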

Step 8: Initialize and Apply

Before applying any configuration, initialize your working directory. This downloads the required provider plugins:

terraform init

This command scans your .tf files, identifies providers, and downloads the necessary plugins into a hidden .terraform directory.

Next, review your configuration plan:

terraform plan

The plan output shows what Terraform will create, modify, or destroy. Always review this before applying changes.

Finally, apply the configuration:

terraform apply

Terraform will prompt for confirmation. Type yes to proceed. Once complete, your infrastructure is live.

Step 9: Manage State and Remote Backend

By default, Terraform stores state locally in terraform.tfstate. This is acceptable for personal use but risky in teams.

For collaboration and reliability, use a remote backend like Amazon S3, Azure Storage, or HashiCorp Cloud Platform (HCP) Terraform:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

After adding the backend block, reinitialize:

terraform init

Terraform will migrate your local state to the remote backend. This ensures state is shared, locked during operations, and encrypted at rest.

Step 10: Use Modules for Reusability

As your infrastructure grows, duplicating code across projects becomes unmanageable. Terraform modules allow you to package configurations into reusable components.

Create a module directory:

mkdir modules/webserver

In modules/webserver/main.tf:

resource "aws_instance" "server" {
  ami           = var.ami
  instance_type = var.instance_type

  tags = {
    Name = var.name
  }
}

In modules/webserver/variables.tf:

variable "ami" {
  type = string
}

variable "instance_type" {
  type = string
}

variable "name" {
  type = string
}

In modules/webserver/outputs.tf:

output "instance_id" {
  value = aws_instance.server.id
}

In your root main.tf:

module "web_server" {
  source        = "./modules/webserver"
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  name          = "web-server-prod"
}

Modules promote clean architecture, reduce duplication, and enable team-wide standardization.
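Modules do not have to live in your own repository. The Terraform Registry hosts community modules you can source by name and version; for example, the widely used terraform-aws-modules/vpc module (the arguments shown are illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0" # pin a major version so upgrades are deliberate

  name = "myapp-vpc"
  cidr = "10.0.0.0/16"
}
```

Running terraform init downloads the module alongside your providers.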

Best Practices

1. Never Commit Sensitive Data

Never store API keys, passwords, or certificates in version-controlled files. Use environment variables, AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault to inject secrets at runtime. Add terraform.tfvars, *.tfstate, and .terraform to your .gitignore file.
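A minimal .gitignore for a Terraform repository might look like this; adjust to your workflow, since some teams deliberately commit a sanitized example.tfvars:

```text
.terraform/
*.tfstate
*.tfstate.backup
terraform.tfvars
*.auto.tfvars
crash.log
```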

2. Use Version Control

Treat your Terraform code like application code. Use Git to track changes, review pull requests, and enforce code quality. Tag releases for auditability:

git tag v1.0.0
git push origin v1.0.0

3. Write Modular, Reusable Code

Break infrastructure into logical modules: networking, security groups, databases, compute. This makes your code easier to test, maintain, and share across teams.

4. Validate Before Applying

Always run terraform plan before terraform apply. This prevents unintended changes. In CI/CD pipelines, use terraform plan -out=tfplan to generate a plan file and validate it before execution.
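The saved-plan workflow looks like this; applying the plan file guarantees that exactly the reviewed changes, and nothing else, are executed:

```shell
terraform plan -out=tfplan   # write the reviewed plan to a file
terraform apply tfplan       # apply exactly that plan, with no re-planning
```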

5. Use Terraform Linting Tools

Use terraform fmt to auto-format your code for consistency:

terraform fmt -recursive

Use checkov or tfsec to scan for security misconfigurations:

tfsec .

6. Lock State with Remote Backend

Always use a remote backend with state locking (e.g., S3 + DynamoDB). This prevents concurrent modifications that can corrupt state.

7. Document Your Code

Add comments and descriptions in your variables and outputs. Use README.md files in each module to explain usage, dependencies, and assumptions.
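Descriptions pair well with validation rules, so misuse fails early with a clear message. A sketch:

```hcl
variable "environment" {
  description = "Deployment environment"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be dev, staging, or prod."
  }
}
```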

8. Test in Non-Production First

Use separate workspaces or directories for dev, staging, and prod. Use terraform workspace to manage multiple states within the same configuration:

terraform workspace new dev
terraform workspace select dev
terraform apply

9. Avoid Hardcoding IDs and Names

Use variables, data sources, and dynamic expressions instead. For example, retrieve an AMI dynamically:

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami = data.aws_ami.ubuntu.id
  # ...
}

10. Adopt a Naming Convention

Use consistent naming for resources and variables. For example:

  • Resources: aws_instance.web_server
  • Variables: instance_type
  • Outputs: instance_public_ip

This improves readability and reduces cognitive load for team members.

Tools and Resources

Official Documentation

The Terraform Language Documentation is the most authoritative source for syntax, functions, and provider details. Bookmark it.

Provider Registry

Visit the Terraform Registry to discover official and community-supported providers. Each provider page includes examples, required arguments, and version compatibility.

IDE Support

Use editors with Terraform syntax highlighting and linting:

  • Visual Studio Code with the HashiCorp Terraform extension
  • IntelliJ IDEA with the Terraform plugin
  • Sublime Text with Terraform syntax packages

These tools offer auto-completion, error detection, and formatting support.

Linting and Security Scanning Tools

  • tfsec: Static analysis tool for security best practices.
  • checkov: Scans for compliance and security misconfigurations.
  • terrascan: Policy-as-code scanner for IaC.
  • terraform validate: Checks syntax and configuration validity.

Integrate these into your CI/CD pipeline to catch issues before deployment.

CI/CD Integration

Automate Terraform workflows using:

  • GitHub Actions: Run terraform plan on PRs.
  • GitLab CI: Deploy using Terraform in pipelines.
  • CircleCI: Use orbs for Terraform tasks.
  • Argo CD: For GitOps-style infrastructure deployment.

Example GitHub Actions workflow:

name: Terraform Plan & Apply

on:
  push:
    branches: [ main ]

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan
        run: terraform plan

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve

Learning Resources

  • HashiCorp Learn: Interactive tutorials at https://learn.hashicorp.com/terraform
  • Udemy: Terraform for Beginners by Stephane Maarek
  • YouTube: TechWorld with Nana's Terraform playlist
  • Books: Terraform Up & Running by Yevgeniy Brikman

Real Examples

Example 1: Deploy a Simple Web Server with S3 Static Hosting

This example creates an S3 bucket for static website hosting and an IAM role with minimal permissions.

main.tf:

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "website" {
  bucket = "my-static-website-2024"

  website {
    index_document = "index.html"
    error_document = "error.html"
  }

  tags = {
    Name = "StaticWebsite"
  }
}

resource "aws_s3_bucket_public_access_block" "public_access" {
  bucket = aws_s3_bucket.website.id

  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

resource "aws_s3_bucket_acl" "website_acl" {
  bucket = aws_s3_bucket.website.id
  acl    = "public-read"
}

resource "aws_iam_role" "s3_role" {
  name = "s3-static-hosting-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "s3.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "s3_policy" {
  role       = aws_iam_role.s3_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}

outputs.tf:

output "website_url" {
  value = aws_s3_bucket.website.website_endpoint
}

After applying, you'll get a URL like my-static-website-2024.s3-website-us-east-1.amazonaws.com where you can upload your HTML files.

Example 2: Multi-Tier Architecture with VPC, Subnets, and RDS

This example deploys a secure, scalable architecture with public and private subnets, an RDS database, and an EC2 instance in a private subnet.

main.tf:

provider "aws" {
  region = "us-east-1"
}

# VPC
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "main-vpc"
  }
}

# Public Subnets
resource "aws_subnet" "public_1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    Name = "public-subnet-1"
  }
}

resource "aws_subnet" "public_2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true

  tags = {
    Name = "public-subnet-2"
  }
}

# Private Subnets
resource "aws_subnet" "private_1" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "us-east-1a"

  tags = {
    Name = "private-subnet-1"
  }
}

resource "aws_subnet" "private_2" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.4.0/24"
  availability_zone = "us-east-1b"

  tags = {
    Name = "private-subnet-2"
  }
}

# Internet Gateway
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "main-igw"
  }
}

# Public Route Table
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Name = "public-route-table"
  }
}

# Associate Public Subnets
resource "aws_route_table_association" "public_1" {
  subnet_id      = aws_subnet.public_1.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "public_2" {
  subnet_id      = aws_subnet.public_2.id
  route_table_id = aws_route_table.public.id
}

# NAT Gateway (for private subnets)
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_1.id

  tags = {
    Name = "nat-gateway"
  }
}

# Private Route Table
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }

  tags = {
    Name = "private-route-table"
  }
}

# Associate Private Subnets
resource "aws_route_table_association" "private_1" {
  subnet_id      = aws_subnet.private_1.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "private_2" {
  subnet_id      = aws_subnet.private_2.id
  route_table_id = aws_route_table.private.id
}

# Security Group for Web Server
resource "aws_security_group" "web_sg" {
  name        = "web-sg"
  description = "Allow HTTP and SSH"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "web-sg"
  }
}

# Security Group for RDS
resource "aws_security_group" "db_sg" {
  name        = "db-sg"
  description = "Allow MySQL from web servers"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.web_sg.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "db-sg"
  }
}

# RDS MySQL Instance
resource "aws_db_instance" "main" {
  allocated_storage      = 20
  engine                 = "mysql"
  engine_version         = "8.0"
  instance_class         = "db.t3.micro"
  name                   = "myapp_db"
  username               = "admin"
  password               = "secure_password_123" # demo only; use a variable or secrets manager in practice
  db_subnet_group_name   = aws_db_subnet_group.main.name
  vpc_security_group_ids = [aws_security_group.db_sg.id]
  skip_final_snapshot    = true

  tags = {
    Name = "myapp-db"
  }
}

# DB Subnet Group
resource "aws_db_subnet_group" "main" {
  name       = "myapp-db-subnet-group"
  subnet_ids = [aws_subnet.private_1.id, aws_subnet.private_2.id]

  tags = {
    Name = "myapp-db-subnet-group"
  }
}

# EC2 Instance in Private Subnet
resource "aws_instance" "web_app" {
  ami                    = "ami-0c55b159cbfafe1f0"
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.private_1.id
  vpc_security_group_ids = [aws_security_group.web_sg.id]

  tags = {
    Name = "web-app-server"
  }
}

This example demonstrates how Terraform can orchestrate complex, multi-resource architectures with proper isolation, security, and scalability.

FAQs

What is the difference between Terraform and CloudFormation?

Terraform is cloud-agnostic and supports multiple providers (AWS, Azure, GCP, etc.) with a consistent syntax. AWS CloudFormation is specific to AWS and uses YAML or JSON. Terraform's state management and module system are more mature, while CloudFormation integrates natively with AWS services like IAM and Lambda.

Can Terraform manage on-premises infrastructure?

Yes. Terraform supports providers for VMware, OpenStack, Nutanix, and even bare-metal servers via Ansible or IPMI. It's not limited to public clouds.

How do I roll back a Terraform deployment?

Terraform doesn't have a built-in rollback. However, you can:

  • Use version control to revert to a previous configuration.
  • Use terraform apply -target=resource.name to modify specific resources.
  • Use state backups and remote backends to restore a prior state.

Is Terraform state file secure?

By default, the local state file is not encrypted. Always use a remote backend with encryption (e.g., S3 with SSE) and enable state locking. Never commit it to version control.

Can I use Terraform without cloud providers?

Yes. Terraform can manage DNS records, Kubernetes clusters, Docker containers, or even network devices using appropriate providers like dns, kubernetes, or docker.

How do I handle secrets in Terraform?

Never hardcode secrets. Use:

  • Environment variables
  • HashiCorp Vault
  • AWS Secrets Manager
  • Azure Key Vault
  • External data sources

What happens if I delete the terraform.tfstate file?

Terraform loses track of the infrastructure it manages. Running terraform apply afterward will attempt to recreate all resources, potentially causing conflicts or downtime. Always back up your state file and use remote backends.

How do I update Terraform versions?

Update the Terraform binary using your package manager or download the new version from HashiCorp. Then run terraform init to upgrade provider plugins. Always test upgrades in a non-production environment first.
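To make upgrades deliberate rather than accidental, pin the Terraform and provider versions your configuration was tested with; the version numbers below are examples:

```hcl
terraform {
  required_version = ">= 1.8.0" # minimum Terraform CLI version

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # any 5.x release, but not 6.0
    }
  }
}
```

With these constraints, terraform init fails fast when someone runs an incompatible version.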

Conclusion

Writing Terraform scripts is more than learning syntax; it's about adopting a disciplined, scalable approach to infrastructure management. By following the steps outlined in this guide, from setting up providers and defining resources to leveraging modules, remote backends, and CI/CD pipelines, you empower your team to deploy infrastructure with speed, consistency, and confidence.

As cloud environments grow in complexity, the ability to codify infrastructure becomes a competitive advantage. Terraform provides the tools to automate, audit, and iterate on your infrastructure like software. Whether you're managing a single server or a global distributed system, well-written Terraform scripts are the foundation of modern DevOps.

Start small. Build in modules. Test rigorously. Automate everything. And most importantly, never stop learning. The landscape of cloud infrastructure evolves rapidly, and Terraform remains at the forefront of innovation. With this guide as your foundation, you're now equipped to write Terraform scripts that are not just functional, but exemplary.