Terraform for Beginners: Provision AWS Infrastructure Step-by-Step
Infrastructure as Code (IaC) has become one of the most in-demand skills for DevOps and Cloud engineers. And when it comes to IaC tools, Terraform by HashiCorp is the clear industry standard. It lets you define, provision, and manage cloud infrastructure using simple configuration files — no clicking through AWS consoles, no manual setup.
In this comprehensive Terraform tutorial for beginners, you’ll go from zero to provisioning real AWS infrastructure, step by step. By the end, you’ll understand how Terraform works and have a solid foundation to build on.
What is Terraform and Why Should You Learn It?
Terraform is an open-source Infrastructure as Code tool that allows you to define cloud and on-premises infrastructure in human-readable configuration files. Instead of manually creating resources in the AWS console, you write code that describes your desired state — and Terraform makes it happen.
Why Terraform over other tools?
| Tool | Provider | Language | Multi-cloud? |
|---|---|---|---|
| Terraform | HashiCorp | HCL | Yes (AWS, Azure, GCP, 3,000+ providers) |
| CloudFormation | AWS | JSON/YAML | No (AWS only) |
| Pulumi | Pulumi Corp | Python/Go/JS | Yes |
| Ansible | Red Hat | YAML | Yes (but config mgmt focused) |
Terraform’s key advantages:
- Multi-cloud support — manage AWS, Azure, and GCP with one tool
- Declarative syntax — you describe what you want, not how to build it
- State management — Terraform tracks what it has built
- Huge community — thousands of pre-built modules on the Terraform Registry
- Job market demand — Terraform is listed in the majority of DevOps/Cloud job postings
How Terraform Works: Core Concepts
Before writing any code, you need to understand the key concepts that make Terraform tick.
The Terraform workflow
Terraform follows a simple three-step cycle:
- Write — Define your infrastructure in `.tf` configuration files using HCL (HashiCorp Configuration Language)
- Plan — Run `terraform plan` to preview what Terraform will create, change, or destroy
- Apply — Run `terraform apply` to execute the plan and provision your infrastructure
Key terminology
- Provider — A plugin that lets Terraform talk to a specific platform (e.g., AWS, Azure, GCP). Providers expose resources and data sources.
- Resource — A single piece of infrastructure, like an EC2 instance, S3 bucket, or VPC.
- State file — A JSON file (`terraform.tfstate`) that records the current state of your infrastructure.
- Module — A reusable package of Terraform configurations. Think of it as a function for infrastructure.
- Data source — Lets you read information from existing infrastructure (rather than creating something new).
- Variable — An input parameter that makes your configurations reusable.
- Output — A value exported from your Terraform configuration, like an instance’s public IP.
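To make these terms concrete, here’s a minimal, hypothetical HCL fragment (the names `bucket_name` and `demo` are illustrative, not part of the tutorial project) showing a variable, a data source, a resource, and an output side by side:

```hcl
# Variable — an input parameter that makes the config reusable
variable "bucket_name" {
  type    = string
  default = "demo-bucket"
}

# Data source — reads existing information (here, the current AWS account)
data "aws_caller_identity" "current" {}

# Resource — a single piece of infrastructure Terraform will create
resource "aws_s3_bucket" "demo" {
  bucket = "${var.bucket_name}-${data.aws_caller_identity.current.account_id}"
}

# Output — a value exported after apply
output "bucket_arn" {
  value = aws_s3_bucket.demo.arn
}
```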
Prerequisites
Before starting this Terraform tutorial, make sure you have the following:
- An AWS account (free tier is sufficient)
- An IAM user with programmatic access (Access Key ID + Secret Access Key)
- A computer running Linux, macOS, or Windows
- Basic familiarity with the command line/terminal
- A code editor (VS Code with the HashiCorp Terraform extension recommended)
Tip: For this tutorial, you can attach the AdministratorAccess policy to your IAM user for learning purposes. For production, always follow least-privilege principles.
Step 1 — Install Terraform
On macOS (using Homebrew)
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
On Ubuntu / Debian Linux
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt-get install terraform
On Windows
Download the installer from the official Terraform Downloads page, extract the binary, and add it to your system PATH. Alternatively, use the Chocolatey package manager:
choco install terraform
Verify the installation
terraform version
You should see output like:
Terraform v1.7.0
on linux_amd64
Step 2 — Configure AWS Credentials
Terraform needs AWS credentials to create resources on your behalf. The most common approach is using the AWS CLI.
Install the AWS CLI
# macOS
brew install awscli
# Linux
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Configure your credentials
aws configure
You’ll be prompted to enter:
AWS Access Key ID [None]: YOUR_ACCESS_KEY_ID
AWS Secret Access Key [None]: YOUR_SECRET_ACCESS_KEY
Default region name [None]: us-east-1
Default output format [None]: json
This stores your credentials in ~/.aws/credentials. Terraform automatically picks them up from this location.
Tip: Alternatively, you can supply credentials through the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. This is useful in CI/CD pipelines.
Step 3 — Write Your First Terraform Configuration
Create a new directory for your project and open it in VS Code:
mkdir terraform-aws-demo
cd terraform-aws-demo
code .
Create a file called main.tf. This is the entry point for your Terraform configuration.
Declare the AWS provider
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
required_version = ">= 1.5.0"
}
provider "aws" {
region = "us-east-1"
}
Let’s break this down:
- The `terraform` block sets the minimum Terraform version and declares which providers the configuration needs.
- The `provider "aws"` block configures the AWS provider and specifies which region to use. `~> 5.0` means “use version 5.x but not 6.0” — this is a version constraint that prevents breaking changes.
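For reference, these are the common version constraint operators (the version numbers here are illustrative):

```hcl
# version = "= 5.31.0"   # exactly this version
# version = ">= 1.5.0"   # this version or newer
# version = "~> 5.0"     # any 5.x release (>= 5.0, < 6.0)
# version = "~> 5.31.0"  # any 5.31.x patch release (>= 5.31.0, < 5.32.0)
```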
Initialize Terraform
Run this command in your project directory to download the AWS provider:
terraform init
You’ll see output like:
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 5.0"...
- Installing hashicorp/aws v5.31.0...
- Installed hashicorp/aws v5.31.0 (signed by HashiCorp)
Terraform has been successfully initialized!
This creates a .terraform directory with the downloaded provider binary, plus a .terraform.lock.hcl file that records the exact provider versions selected.
Step 4 — Core Terraform Commands
Before creating real resources, here are the essential Terraform commands you’ll use constantly:
| Command | What It Does |
|---|---|
| `terraform init` | Initializes the working directory, downloads providers and modules |
| `terraform plan` | Previews what changes will be made (dry run) |
| `terraform apply` | Applies the planned changes and provisions infrastructure |
| `terraform destroy` | Destroys all resources managed by the configuration |
| `terraform fmt` | Formats your `.tf` files according to HCL style conventions |
| `terraform validate` | Validates the syntax of your configuration files |
| `terraform show` | Displays the current state or a saved plan |
| `terraform output` | Displays the values of output variables |
| `terraform state list` | Lists all resources tracked in the state file |
Tip: Always run `terraform plan` before `terraform apply`. Review the plan carefully — Terraform will show you exactly what it’s going to create, change, or destroy. Never skip this step.
Step 5 — Provision an EC2 Instance
Now let’s create something real. Add the following to your main.tf file to provision an EC2 instance:
# Get the latest Amazon Linux 2023 AMI
data "aws_ami" "amazon_linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["al2023-ami-*-x86_64"]
}
}
# Create a security group
resource "aws_security_group" "web_sg" {
name = "web-security-group"
description = "Allow HTTP and SSH traffic"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
description = "Allow SSH"
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
description = "Allow HTTP"
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
description = "Allow all outbound traffic"
}
tags = {
Name = "web-sg"
Environment = "dev"
ManagedBy = "Terraform"
}
}
# Provision the EC2 instance
resource "aws_instance" "web_server" {
ami = data.aws_ami.amazon_linux.id
instance_type = "t2.micro"
vpc_security_group_ids = [aws_security_group.web_sg.id]
user_data = <<-EOF
#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "Hello from TechwithAssem! Deployed with Terraform." > /var/www/html/index.html
EOF
tags = {
Name = "web-server"
Environment = "dev"
ManagedBy = "Terraform"
}
}
A few important things happening here:
- The `data "aws_ami"` block queries AWS to find the most recent Amazon Linux 2023 AMI — so you never hardcode an AMI ID.
- The `aws_security_group` resource creates a firewall rule allowing SSH (port 22) and HTTP (port 80).
- The `user_data` script runs on first boot, installing Apache and serving a simple web page.
- The `ManagedBy = "Terraform"` tag is a best practice — it makes it obvious in the AWS console which resources Terraform controls.
Preview and apply
terraform plan
terraform apply
Type yes when prompted. Terraform will provision the EC2 instance in about 30–60 seconds. You’ll see the instance ID in the output.
Step 6 — Create an S3 Bucket
Add the following to your main.tf:
resource "aws_s3_bucket" "app_bucket" {
bucket = "techwithassem-demo-bucket-${random_id.suffix.hex}"
tags = {
Name = "app-storage"
Environment = "dev"
ManagedBy = "Terraform"
}
}
resource "random_id" "suffix" {
byte_length = 4
}
resource "aws_s3_bucket_versioning" "app_bucket_versioning" {
bucket = aws_s3_bucket.app_bucket.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "app_bucket_encryption" {
bucket = aws_s3_bucket.app_bucket.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_s3_bucket_public_access_block" "app_bucket_block" {
bucket = aws_s3_bucket.app_bucket.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
Note: Because S3 bucket names are globally unique, we append a random suffix to avoid naming conflicts. This is a common pattern in Terraform.
You’ll also need to declare the random provider in your terraform block:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
random = {
source = "hashicorp/random"
version = "~> 3.0"
}
}
}
Re-initialize and apply:
terraform init
terraform apply
Step 7 — Set Up a VPC with Subnets
In real-world environments, you always deploy resources inside a custom VPC rather than the default one. Here’s how to create a basic VPC setup:
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = "main-vpc"
ManagedBy = "Terraform"
}
}
resource "aws_subnet" "public_subnet" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-1a"
map_public_ip_on_launch = true
tags = {
Name = "public-subnet"
ManagedBy = "Terraform"
}
}
resource "aws_subnet" "private_subnet" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.2.0/24"
availability_zone = "us-east-1b"
tags = {
Name = "private-subnet"
ManagedBy = "Terraform"
}
}
resource "aws_internet_gateway" "igw" {
vpc_id = aws_vpc.main.id
tags = {
Name = "main-igw"
ManagedBy = "Terraform"
}
}
resource "aws_route_table" "public_rt" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
}
tags = {
Name = "public-route-table"
ManagedBy = "Terraform"
}
}
resource "aws_route_table_association" "public_rta" {
subnet_id = aws_subnet.public_subnet.id
route_table_id = aws_route_table.public_rt.id
}
This creates a basic network layout with a public subnet (internet-accessible) and a private subnet (for databases or backend services). A production setup would typically also add a NAT gateway so instances in the private subnet can reach the internet for updates.
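To launch the web server from Step 5 inside this custom VPC instead of the default one, you could point it at the public subnet. This is a sketch — only the changed arguments are shown, and note that the security group from Step 5 would also need `vpc_id = aws_vpc.main.id` added, since security groups are scoped to a VPC:

```hcl
resource "aws_instance" "web_server" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public_subnet.id  # place in the custom VPC's public subnet
  vpc_security_group_ids = [aws_security_group.web_sg.id]
  # ... user_data and tags as in Step 5
}
```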
Step 8 — Use Variables and Outputs
Hardcoding values in your configuration is fine for learning, but in real projects you should use variables. Create a variables.tf file:
variable "aws_region" {
description = "The AWS region to deploy resources"
type = string
default = "us-east-1"
}
variable "instance_type" {
description = "EC2 instance type"
type = string
default = "t2.micro"
}
variable "environment" {
description = "Environment name (dev, staging, prod)"
type = string
default = "dev"
validation {
condition = contains(["dev", "staging", "prod"], var.environment)
error_message = "Environment must be dev, staging, or prod."
}
}
Now update your main.tf to reference these variables:
provider "aws" {
region = var.aws_region
}
resource "aws_instance" "web_server" {
instance_type = var.instance_type
# ... rest of config
tags = {
Environment = var.environment
}
}
Create an outputs.tf file
Outputs let you retrieve useful information after terraform apply:
output "instance_public_ip" {
description = "Public IP address of the EC2 instance"
value = aws_instance.web_server.public_ip
}
output "instance_id" {
description = "EC2 instance ID"
value = aws_instance.web_server.id
}
output "s3_bucket_name" {
description = "Name of the S3 bucket"
value = aws_s3_bucket.app_bucket.bucket
}
output "vpc_id" {
description = "ID of the VPC"
value = aws_vpc.main.id
}
After applying, you can retrieve any output value with:
terraform output instance_public_ip
Override variables at runtime
# Pass a variable on the command line
terraform apply -var="environment=prod"
# Use a .tfvars file (recommended for multiple variables)
terraform apply -var-file="prod.tfvars"
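A hypothetical `prod.tfvars` for the variables defined above might look like this (the `t3.small` value is illustrative):

```hcl
# prod.tfvars — values here override the defaults in variables.tf
aws_region    = "us-east-1"
instance_type = "t3.small"
environment   = "prod"
```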
Step 9 — Understand Terraform State
The state file (terraform.tfstate) is one of the most important — and most misunderstood — concepts in Terraform. It’s how Terraform knows what infrastructure already exists so it can calculate what changes need to be made.
Local state vs remote state
By default, Terraform stores state locally in terraform.tfstate. This is fine for personal projects, but for team environments you must use remote state to avoid conflicts.
To store state in an S3 bucket (the standard approach for AWS), add a backend block:
terraform {
backend "s3" {
bucket = "your-terraform-state-bucket"
key = "project/terraform.tfstate"
region = "us-east-1"
encrypt = true
dynamodb_table = "terraform-state-lock" # For state locking
}
}
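The state bucket and lock table themselves must exist before `terraform init` can use this backend. A common pattern is to create them once, by hand or in a separate bootstrap configuration — a sketch, assuming the names match the backend block above:

```hcl
resource "aws_s3_bucket" "tf_state" {
  bucket = "your-terraform-state-bucket"
}

resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"  # the S3 backend requires exactly this key name

  attribute {
    name = "LockID"
    type = "S"
  }
}
```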
Warning: Never commit your state file to version control — it can contain sensitive values in plain text. Add *.tfstate and *.tfstate.backup to your .gitignore.
Useful state commands
# List all resources in state
terraform state list
# Show details for a specific resource
terraform state show aws_instance.web_server
# Remove a resource from state (without destroying it)
terraform state rm aws_instance.web_server
# Import an existing resource into state
terraform import aws_instance.web_server i-0abc123def456
Best Practices for Beginners
Once you’re comfortable with the basics, apply these practices to write production-quality Terraform:
- Use modules for reusable components. Instead of repeating the same VPC code across multiple configurations, package it as a module and call it with different variables.
- Always tag your resources. Tags like `Environment`, `ManagedBy`, and `Owner` are essential for cost tracking and identifying what Terraform manages in your AWS account.
- Use remote state with locking. Store your `.tfstate` in S3 and use a DynamoDB table for state locking. This prevents two people from running `terraform apply` simultaneously and corrupting the state.
- Pin provider versions. Always use version constraints like `~> 5.0` to avoid unexpected breaking changes when providers release new versions.
- Separate environments with workspaces or directory structure. Keep `dev/`, `staging/`, and `prod/` in separate directories with their own state files.
- Run `terraform fmt` and `terraform validate` before every commit. Better yet, integrate them into a pre-commit hook or CI/CD pipeline.
- Review the plan output carefully. Before applying, look for any unexpected `-/+` (replacement) actions. Replacing a resource means it will be destroyed and recreated — potentially causing downtime.
- Run `terraform destroy` after learning. Don’t forget to clean up your AWS resources when you’re done experimenting to avoid unexpected charges.
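As an example of the first practice, instead of hand-writing the VPC resources from Step 7 you could call a community module from the Terraform Registry. A sketch using the popular `terraform-aws-modules/vpc/aws` module (check the module’s documentation for the full list of inputs):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "main-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.1.0/24"]
  private_subnets = ["10.0.2.0/24"]
}
```

One `module` block replaces the VPC, subnets, internet gateway, and route table resources you wrote by hand, and you reuse it across environments just by changing the inputs.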
Frequently Asked Questions
Is Terraform free to use?
Yes, the open-source Terraform CLI is completely free. HashiCorp also offers Terraform Cloud (free tier available) and Terraform Enterprise for teams who need remote state management, collaboration, and governance features.
Do I need to know AWS before learning Terraform?
You should have basic AWS knowledge — understanding what EC2, S3, VPC, and IAM are will help a lot. You don’t need to be an expert, but knowing what you’re creating makes it much easier to learn Terraform.
What’s the difference between Terraform and Ansible?
Terraform is for provisioning infrastructure (creating and managing cloud resources). Ansible is primarily for configuration management (installing software, configuring OS settings on existing servers). They’re often used together: Terraform provisions the servers, Ansible configures them.
How long does it take to learn Terraform?
You can be productive with Terraform in 2–3 weeks of consistent practice. The HCL syntax is simple. Most of the learning curve comes from understanding the cloud resources you’re managing, not Terraform itself.
What certifications are available for Terraform?
HashiCorp offers the HashiCorp Certified: Terraform Associate certification. It’s a strong addition to any DevOps or Cloud resume and validates that you can work with Terraform in real-world scenarios.
Conclusion
In this Terraform tutorial for beginners, you’ve covered a lot of ground:
- What Terraform is and how it compares to other IaC tools
- Core concepts: providers, resources, state, variables, and outputs
- Installing Terraform and configuring AWS credentials
- Provisioning an EC2 instance, S3 bucket, and VPC step by step
- Using variables and outputs for reusable, maintainable configurations
- Understanding Terraform state and remote backends
- Best practices for working with Terraform in real projects
Terraform is a skill that compounds quickly. Once you understand the fundamentals, you can manage infrastructure of any complexity — from a single server to a full multi-region Kubernetes platform. The best way to cement your knowledge is to keep building: try adding an RDS database, an Application Load Balancer, or an Auto Scaling Group to the project you built today.



