Terraform from Scratch: HCL, Provider and the Plan-Apply-Destroy Cycle
In 2026, the Infrastructure as Code market has matured to the point that ignoring Terraform is like writing SQL without understanding transactions: technically possible, but professionally risky. HashiCorp's Terraform today holds a 32.8% market share among IaC tools, with over 4 million monthly provider installations on registry.terraform.io. This guide takes you from scratch to a working setup on AWS and Azure, with a solid understanding of the Plan-Apply-Destroy cycle that will help you avoid costly mistakes in production.
The question I hear most often from developers approaching IaC is: “Why don't I just use bash scripts or the AWS console?”. The answer lies in the concept of declarative infrastructure: you don't tell Terraform how to create resources, you declare the state you want to reach. Terraform takes care of working out the sequence of operations, managing dependencies between resources and guaranteeing idempotency.
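To make the declarative model concrete, here is a minimal sketch (the bucket name is illustrative): you declare that a resource should exist, and Terraform converges the real infrastructure toward that state, so re-running apply on an unchanged configuration performs no work.

```hcl
# Declarative: we describe WHAT must exist, not HOW to create it.
# Running `terraform apply` a second time reports "no changes":
# the real state already matches the declared state (idempotency).
resource "aws_s3_bucket" "artifacts" {
  bucket = "myapp-dev-artifacts" # illustrative name

  tags = {
    ManagedBy = "Terraform"
  }
}
```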
What You Will Learn
- Install Terraform and configure tfenv to handle multiple versions
- HCL syntax: blocks, attributes, expressions and built-in functions
- Configure the AWS provider and Azure provider with secure authentication
- The complete lifecycle: terraform init, plan, apply, destroy
- Understanding the state file: what it is, where it lives, why it should never be modified manually
- Structure a Terraform project from the start in a scalable way
- Variables, locals and outputs: organizing the configuration professionally
Prerequisites and Installation
Before writing the first line of HCL, let's set up the environment professionally. The most common beginner mistake is installing Terraform globally with a package manager and then finding yourself stuck on a specific version. The better solution is tfenv, the version manager for Terraform (the equivalent of nvm for Node.js).
# Install tfenv (macOS/Linux)
git clone https://github.com/tfutils/tfenv.git ~/.tfenv
echo 'export PATH="$HOME/.tfenv/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
# Install the latest stable version of Terraform
tfenv install latest
tfenv use latest
# Verify the installation
terraform version
# Output: Terraform v1.9.x on linux_amd64
# Install a specific version if the project requires it
tfenv install 1.8.5
tfenv use 1.8.5
# List installed versions
tfenv list
For Windows, the recommended approach in 2026 is WSL2 with Ubuntu, or alternatively Chocolatey:
# Windows with Chocolatey
choco install terraform
# Or download the binary from releases.hashicorp.com
# and add it to the PATH manually
Also install the HashiCorp Terraform VS Code extension (id: hashicorp.terraform):
it offers syntax highlighting, autocompletion, go-to-definition for providers and integration
with terraform fmt. It is the most robust IDE experience available for HCL.
HCL: HashiCorp Configuration Language
HCL (HashiCorp Configuration Language) is a declarative language designed specifically
for infrastructure configuration. It is not a general-purpose programming language:
it has no imperative loops (iteration is expressed declaratively with for_each and count),
no exceptions, and no asynchronous I/O. This is a feature, not a limitation: simplicity
reduces the accidental complexity of configurations.
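As a sketch of the two declarative iteration constructs (resource names are illustrative), the same set of buckets can be created with count, which addresses instances by index, or with for_each, which addresses them by key:

```hcl
variable "team_names" {
  type    = list(string)
  default = ["alpha", "beta", "gamma"]
}

# count: instances are addressed by index (aws_s3_bucket.by_index[0], [1], ...)
resource "aws_s3_bucket" "by_index" {
  count  = length(var.team_names)
  bucket = "demo-${var.team_names[count.index]}"
}

# for_each: instances are addressed by key (aws_s3_bucket.by_key["alpha"], ...)
resource "aws_s3_bucket" "by_key" {
  for_each = toset(var.team_names)
  bucket   = "demo-${each.key}"
}
```

With count, removing "alpha" from the list shifts every index and forces needless re-creation; with for_each, only the removed key is destroyed, which is why for_each is generally preferred for collections.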
The fundamental file of every Terraform project is main.tf. The common convention
is to separate the configuration into these files:
progetto-terraform/
├── main.tf # Main resources
├── variables.tf # Variable declarations
├── outputs.tf # Output values
├── versions.tf # Required providers and terraform block
├── locals.tf # Computed local values
└── terraform.tfvars # Variable values (do not commit if it contains secrets)
Here is the basic structure of HCL with the fundamental block types:
# versions.tf — terraform block: configures core behavior
terraform {
required_version = ">= 1.8.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
azurerm = {
source = "hashicorp/azurerm"
version = "~> 3.100"
}
}
}
# variables.tf — input variable declarations
variable "environment" {
description = "Environment name (dev, staging, prod)"
type = string
default = "dev"
validation {
condition = contains(["dev", "staging", "prod"], var.environment)
error_message = "The environment must be dev, staging or prod."
}
}
variable "aws_region" {
description = "AWS region where the resources are deployed"
type = string
default = "eu-west-1"
}
variable "instance_count" {
description = "Number of EC2 instances to create"
type = number
default = 2
}
variable "tags" {
description = "Common tags to apply to all resources"
type = map(string)
default = {
ManagedBy = "Terraform"
Project = "demo"
}
}
# locals.tf — values computed from variables
locals {
# Uniform naming convention
name_prefix = "${var.environment}-${var.tags["Project"]}"
# Merge common tags with resource-specific tags
common_tags = merge(var.tags, {
Environment = var.environment
CreatedAt = timestamp()
})
}
Configure the AWS Provider
Providers are plugins that allow Terraform to interact with external APIs: AWS, Azure, GCP, Kubernetes, GitHub, Datadog — virtually every service has a Terraform provider. The AWS provider is the most mature, with over 1,200 supported resource types.
For AWS authentication, never hardcode access keys in HCL code. The AWS provider supports the following methods, in order of precedence:
# main.tf — AWS provider configuration
provider "aws" {
region = var.aws_region
# Do NOT do this in production:
# access_key = "AKIA..." # NEVER
# secret_key = "..." # NEVER
# Recommended method 1: assume role (best practice for CI/CD)
# assume_role {
# role_arn = "arn:aws:iam::123456789:role/TerraformRole"
# }
# Default tags are applied to all resources
default_tags {
tags = local.common_tags
}
}
# Authentication configuration (outside main.tf)
# The best practice is to use the ~/.aws/credentials file or
# environment variables:
# export AWS_ACCESS_KEY_ID="..."
# export AWS_SECRET_ACCESS_KEY="..."
# export AWS_DEFAULT_REGION="eu-west-1"
#
# In production: IAM Instance Profile or OIDC with GitHub Actions
Now let's create the first AWS resources: a VPC with public and private subnets:
# main.tf — first AWS resources
# Data source: fetches the AZs available in the region
data "aws_availability_zones" "available" {
state = "available"
}
# Main VPC
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "${local.name_prefix}-vpc"
}
}
# Public subnet (one per AZ)
resource "aws_subnet" "public" {
count = min(length(data.aws_availability_zones.available.names), 2)
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
availability_zone = data.aws_availability_zones.available.names[count.index]
map_public_ip_on_launch = true
tags = {
Name = "${local.name_prefix}-public-${count.index + 1}"
Tier = "Public"
}
}
# Private subnet
resource "aws_subnet" "private" {
count = min(length(data.aws_availability_zones.available.names), 2)
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 10)
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = {
Name = "${local.name_prefix}-private-${count.index + 1}"
Tier = "Private"
}
}
# Internet Gateway for the public subnets
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = {
Name = "${local.name_prefix}-igw"
}
}
# Public route table
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
tags = {
Name = "${local.name_prefix}-rt-public"
}
}
# Route table association - public subnets
resource "aws_route_table_association" "public" {
count = length(aws_subnet.public)
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.public.id
}
# outputs.tf — output values
output "vpc_id" {
description = "ID of the created VPC"
value = aws_vpc.main.id
}
output "public_subnet_ids" {
description = "IDs of the public subnets"
value = aws_subnet.public[*].id
}
output "private_subnet_ids" {
description = "IDs of the private subnets"
value = aws_subnet.private[*].id
}
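A note on the cidrsubnet() calls used above: cidrsubnet(prefix, newbits, netnum) extends the prefix length by newbits bits and returns the netnum-th subnet of that size. You can check the arithmetic interactively with terraform console; the expected results are shown as comments:

```hcl
# In `terraform console`:
cidrsubnet("10.0.0.0/16", 8, 0)  # "10.0.0.0/24"  — first public subnet
cidrsubnet("10.0.0.0/16", 8, 1)  # "10.0.1.0/24"  — second public subnet
cidrsubnet("10.0.0.0/16", 8, 10) # "10.0.10.0/24" — first private subnet (offset +10)
```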
The Plan-Apply-Destroy Cycle
The Terraform lifecycle revolves around four fundamental commands. Understanding them thoroughly is essential to avoiding the most common production incidents.
terraform init
# Initializes the working directory:
# - Downloads the providers (plugin binaries)
# - Configures the backend for the state
# - Installs the referenced modules
terraform init
# Expected output:
# Initializing the backend...
# Initializing provider plugins...
# - Finding hashicorp/aws versions matching "~> 5.0"...
# - Installing hashicorp/aws v5.54.1...
# Terraform has been successfully initialized!
# Upgrade providers to the latest compatible version
terraform init -upgrade
# Regenerate .terraform.lock.hcl (the provider lock file)
terraform providers lock -platform=linux_amd64 -platform=darwin_amd64
The .terraform.lock.hcl file
The .terraform.lock.hcl file must be committed to the repository.
It ensures that all team members and the CI pipeline use exactly the same
provider versions, avoiding "works on my machine" at the infrastructure level.
The .terraform/ directory, on the other hand, belongs in .gitignore.
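For reference, an entry in the lock file looks roughly like this (the version and hashes below are illustrative, not real values):

```hcl
# .terraform.lock.hcl — generated file: commit it, never edit it by hand
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.54.1"
  constraints = "~> 5.0"
  hashes = [
    # One checksum per platform, recorded via `terraform providers lock`
    "h1:...",
    "zh:...",
  ]
}
```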
terraform plan
# Generate the execution plan (dry run)
terraform plan
# Save the plan to a binary file (recommended for CI/CD)
terraform plan -out=tfplan
# Specify variables inline
terraform plan -var="environment=staging" -var="instance_count=3"
# Use a variables file
terraform plan -var-file="staging.tfvars"
# Typical plan output:
# Terraform will perform the following actions:
#
# # aws_vpc.main will be created
# + resource "aws_vpc" "main" {
# + arn = (known after apply)
# + cidr_block = "10.0.0.0/16"
# + enable_dns_hostnames = true
# ...
# }
#
# Plan: 8 to add, 0 to change, 0 to destroy.
The + symbol indicates creation, ~ an in-place update, -/+ destroy-and-recreate
(often with downtime), and - destruction.
Always read blocks marked -/+ carefully:
they mean the resource must be replaced, which frequently causes downtime or data loss.
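For contrast with the creation-only plan above, here is a hypothetical excerpt of a plan against existing infrastructure, showing an in-place update next to a forced replacement:

```shell
# ~ update in-place (no replacement):
#   # aws_vpc.main will be updated in-place
#   ~ resource "aws_vpc" "main" {
#       ~ enable_dns_hostnames = false -> true
#     }
#
# -/+ destroy and then re-create (read carefully!):
#   # aws_subnet.public[0] must be replaced
#   -/+ resource "aws_subnet" "public" {
#       ~ cidr_block = "10.0.1.0/24" -> "10.0.50.0/24" # forces replacement
#     }
#
# Plan: 1 to add, 1 to change, 1 to destroy.
```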
terraform apply
# Apply the changes (asks for interactive confirmation)
terraform apply
# Apply the saved plan (does NOT ask for confirmation - use in CI/CD)
terraform apply tfplan
# Auto-approve for non-critical environments (careful in prod!)
terraform apply -auto-approve
# Apply with variables
terraform apply -var="environment=prod" -var-file="prod.tfvars"
# Output after apply:
# Apply complete! Resources: 8 added, 0 changed, 0 destroyed.
#
# Outputs:
# vpc_id = "vpc-0123456789abcdef0"
# public_subnet_ids = [
# "subnet-0abc123def456gh78",
# "subnet-0def456abc789ij01",
# ]
terraform destroy
# Destroys ALL resources managed by the current state
terraform destroy
# Destroy with auto-approve (WARNING: irreversible)
terraform destroy -auto-approve
# Destroy only a specific resource (with great caution)
terraform destroy -target=aws_subnet.public[0]
# In production, prefer removing the resources from the code
# and running terraform apply: Terraform will destroy them correctly
Anti-Pattern: terraform destroy in Production
Never run terraform destroy without a recovery plan.
The best practice is to remove the resource from the HCL code and run terraform apply:
you get the same result, but with an explicit plan you can review first.
Some organizations block the ability to run destroy at the IAM policy level
on production environments.
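The safe workflow described above can be sketched as:

```shell
# 1. Delete (or comment out) the resource block in the .tf files
# 2. Review the plan: the resource appears with the "-" marker
terraform plan -out=tfplan
#   Plan: 0 to add, 0 to change, 1 to destroy.
# 3. Apply the reviewed plan
terraform apply tfplan
```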
Configure the Azure Provider
The azurerm provider manages Azure resources. Unlike AWS, Azure typically uses a Service Principal or a Managed Identity for authentication.
# versions.tf updated with the Azure provider
terraform {
required_version = ">= 1.8.0"
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~> 3.100"
}
}
}
# Azure provider configuration
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = true
}
key_vault {
purge_soft_delete_on_destroy = false
recover_soft_deleted_key_vaults = true
}
}
# Authentication via environment variables (recommended)
# ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_TENANT_ID, ARM_SUBSCRIPTION_ID
# or via Azure CLI: az login
}
# Basic Azure resources
resource "azurerm_resource_group" "main" {
name = "${local.name_prefix}-rg"
location = "West Europe"
tags = local.common_tags
}
resource "azurerm_virtual_network" "main" {
name = "${local.name_prefix}-vnet"
address_space = ["10.1.0.0/16"]
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
tags = local.common_tags
}
resource "azurerm_subnet" "public" {
name = "public-subnet"
resource_group_name = azurerm_resource_group.main.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.1.1.0/24"]
}
resource "azurerm_subnet" "private" {
name = "private-subnet"
resource_group_name = azurerm_resource_group.main.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.1.2.0/24"]
}
Variables, Locals and Data Sources
Good use of variables, locals and data sources is what distinguishes a professional Terraform configuration from an ad hoc script. Here are the fundamental patterns:
# Variable types in HCL
# Simple string
variable "app_name" {
type = string
default = "myapp"
}
# Number with validation
variable "min_instances" {
type = number
default = 1
validation {
condition = var.min_instances >= 1 && var.min_instances <= 10
error_message = "The number of instances must be between 1 and 10."
}
}
# List of strings
variable "allowed_cidr_blocks" {
type = list(string)
default = ["10.0.0.0/8"]
}
# Map of strings
variable "database_config" {
type = map(string)
default = {
engine = "postgres"
version = "15.4"
size = "db.t3.medium"
}
}
# Object with a typed schema
variable "network_config" {
type = object({
vpc_cidr = string
subnet_count = number
enable_nat = bool
})
default = {
vpc_cidr = "10.0.0.0/16"
subnet_count = 2
enable_nat = false
}
}
# Locals: computed values (not user input)
locals {
# String concatenation
full_name = "${var.environment}-${var.app_name}"
# Ternary conditional
instance_type = var.environment == "prod" ? "t3.medium" : "t3.micro"
# Loop over a list (for expression)
subnet_cidrs = [
for i in range(var.network_config.subnet_count) :
cidrsubnet(var.network_config.vpc_cidr, 8, i)
]
# Map transformation
tag_map = {
for k, v in var.database_config :
"db_${k}" => v
}
}
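With the default values declared above, these expressions can be verified in terraform console; the expected results are shown as comments:

```hcl
# In `terraform console` (defaults: environment = "dev", subnet_count = 2):
local.instance_type # "t3.micro" (environment is not "prod")
local.subnet_cidrs  # ["10.0.0.0/24", "10.0.1.0/24"]
local.tag_map       # { db_engine = "postgres", db_size = "db.t3.medium", db_version = "15.4" }
```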
# Data sources: fetch information about existing resources
# not managed by this state
# Most recent Amazon Linux 2023 AMI
data "aws_ami" "amazon_linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["al2023-ami-*-x86_64"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
}
# Current account ID (useful for ARNs)
data "aws_caller_identity" "current" {}
# Usage in resources:
# ami = data.aws_ami.amazon_linux.id
# account_id = data.aws_caller_identity.current.account_id
The State File: The Heart of Terraform
The terraform.tfstate file is Terraform's "source of truth": it maps every resource
defined in HCL to its real-world counterpart in the cloud. Understanding how it works is essential
to avoiding the most serious problems.
// Simplified structure of terraform.tfstate
{
"version": 4,
"terraform_version": "1.9.0",
"serial": 42,
"lineage": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"outputs": {
"vpc_id": {
"value": "vpc-0123456789abcdef0",
"type": "string"
}
},
"resources": [
{
"mode": "managed",
"type": "aws_vpc",
"name": "main",
"provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
"instances": [
{
"schema_version": 1,
"attributes": {
"id": "vpc-0123456789abcdef0",
"cidr_block": "10.0.0.0/16",
"enable_dns_hostnames": true,
...
}
}
]
}
]
}
Golden Rules for the State File
- Never edit terraform.tfstate manually
- Never commit the state to your Git repository (add it to .gitignore)
- In teams, always use a remote backend with locking (see Article 03 of the series)
- The state can contain secrets in plain text (RDS passwords, credentials): handle it as sensitive data
- Before risky operations, make a manual backup: terraform state pull > backup.tfstate
State Inspection Commands
# List all resources managed by the state
terraform state list
# Show the details of a specific resource
terraform state show aws_vpc.main
# Rename a resource in the state (without touching the infrastructure)
terraform state mv aws_subnet.public aws_subnet.public_new
# Remove a resource from the state (Terraform will no longer manage it)
# The resource remains in the cloud but is no longer "owned" by Terraform
terraform state rm aws_subnet.private[0]
# Import an existing resource into the state
# (see article 03 for details)
terraform import aws_vpc.main vpc-0123456789abcdef0
# Show outputs without running apply
terraform output vpc_id
terraform output -json # all outputs in JSON format
Recommended Project Structure
For real projects, the single directory structure quickly becomes unmanageable. Here are two common patterns for organizing a Terraform infrastructure in production:
# Pattern 1: Organization by environment
infra/
├── modules/ # Reusable modules
│ ├── networking/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ └── outputs.tf
│ └── compute/
│ ├── main.tf
│ ├── variables.tf
│ └── outputs.tf
├── environments/
│ ├── dev/
│ │ ├── main.tf # Uses the modules
│ │ ├── variables.tf
│ │ ├── terraform.tfvars # Dev-specific values
│ │ └── versions.tf
│ ├── staging/
│ │ └── ...
│ └── prod/
│ └── ...
└── .gitignore # Excludes .terraform/, *.tfstate
# Pattern 2: Terragrunt (DRY wrapper)
infra/
├── terragrunt.hcl # Global config
├── modules/
│ └── networking/
├── live/
│ ├── dev/
│ │ └── networking/
│ │ └── terragrunt.hcl
│ └── prod/
│ └── networking/
│ └── terragrunt.hcl
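In Pattern 1, each environment's main.tf composes the shared modules with environment-specific inputs. A minimal sketch (the module inputs and outputs shown are hypothetical):

```hcl
# environments/dev/main.tf — composes the reusable modules
module "networking" {
  source = "../../modules/networking"

  environment = "dev"
  vpc_cidr    = "10.0.0.0/16" # hypothetical input variable of the module
}

module "compute" {
  source = "../../modules/compute"

  environment = "dev"
  subnet_ids  = module.networking.private_subnet_ids # hypothetical module output
}
```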
Best Practices for Getting Started
After the theory, here are the rules of thumb that distinguish those who use Terraform correctly from those who create technical debt from day one:
# 1. Always format before committing
terraform fmt -recursive
# 2. Validate the syntax
terraform validate
# 3. Use tflint for advanced linting (finds deprecations, incorrect patterns)
tflint --init
tflint
# 4. Run checkov for security scanning
pip install checkov
checkov -d .
# 5. Pre-commit hooks to automate everything
# .pre-commit-config.yaml
# repos:
# - repo: https://github.com/antonbabenko/pre-commit-terraform
# hooks:
# - id: terraform_fmt
# - id: terraform_validate
# - id: terraform_tflint
# - id: terraform_checkov
The .gitignore for Terraform
# .gitignore for Terraform projects
.terraform/
*.tfstate
*.tfstate.backup
*.tfstate.*.backup
.terraform.tfstate.lock.info
*.tfvars # If they contain secrets (create a terraform.tfvars.example)
override.tf
override.tf.json
*_override.tf
*_override.tf.json
crash.log
.terraformrc
terraform.rc
Complete Example: Web App on AWS
Let's put it all together in a real example: EC2 instances behind an Application Load Balancer, with security groups and an Auto Scaling Group, structured correctly.
# main.tf — complete web app infrastructure
# Security Group for the web instances
resource "aws_security_group" "web" {
name = "${local.name_prefix}-sg-web"
description = "Security group for the web instances"
vpc_id = aws_vpc.main.id
ingress {
description = "HTTP from the load balancer"
from_port = 80
to_port = 80
protocol = "tcp"
security_groups = [aws_security_group.alb.id]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = { Name = "${local.name_prefix}-sg-web" }
}
# Security Group for the ALB
resource "aws_security_group" "alb" {
name = "${local.name_prefix}-sg-alb"
description = "Security group for the Application Load Balancer"
vpc_id = aws_vpc.main.id
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
# Launch Template for the EC2 instances
resource "aws_launch_template" "web" {
name_prefix = "${local.name_prefix}-lt-"
image_id = data.aws_ami.amazon_linux.id
instance_type = local.instance_type
network_interfaces {
associate_public_ip_address = false
security_groups = [aws_security_group.web.id]
}
user_data = base64encode(<<-EOF
#!/bin/bash
dnf update -y
dnf install -y nginx
systemctl enable nginx
systemctl start nginx
echo "Deploy Terraform - ${var.environment}" > /usr/share/nginx/html/index.html
EOF
)
tag_specifications {
resource_type = "instance"
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-web"
})
}
}
# Auto Scaling Group
resource "aws_autoscaling_group" "web" {
name = "${local.name_prefix}-asg"
vpc_zone_identifier = aws_subnet.private[*].id
target_group_arns = [aws_lb_target_group.web.arn]
health_check_type = "ELB"
min_size = var.environment == "prod" ? 2 : 1
max_size = var.environment == "prod" ? 6 : 2
desired_capacity = var.environment == "prod" ? 2 : 1
launch_template {
id = aws_launch_template.web.id
version = "$Latest"
}
tag {
key = "Name"
value = "${local.name_prefix}-web"
propagate_at_launch = true
}
}
# Application Load Balancer
resource "aws_lb" "main" {
name = "${local.name_prefix}-alb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.alb.id]
subnets = aws_subnet.public[*].id
enable_deletion_protection = var.environment == "prod"
tags = local.common_tags
}
resource "aws_lb_target_group" "web" {
name = "${local.name_prefix}-tg-web"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main.id
health_check {
enabled = true
healthy_threshold = 2
unhealthy_threshold = 3
timeout = 5
interval = 30
path = "/health"
}
}
resource "aws_lb_listener" "http" {
load_balancer_arn = aws_lb.main.arn
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.web.arn
}
}
Conclusions and Next Steps
You now have a solid foundation for working with Terraform professionally: you know HCL syntax, you can configure the AWS and Azure providers, and you understand the Plan-Apply-Destroy cycle and the essential rules for managing the state file securely.
The natural next step in your growth with Terraform is learning to organize your code into reusable modules: once your infrastructure exceeds 200-300 lines of HCL, modularization becomes essential for maintainability.
The Complete Series: Terraform and IaC
- Article 01 (this) — Terraform from Scratch: HCL, Provider and Plan-Apply-Destroy
- Article 02 — Designing Reusable Terraform Modules: Structure, I/O and Registry
- Article 03 — Terraform State: Remote Backend with S3/GCS, Locking and Import
- Article 04 — Terraform in CI/CD: GitHub Actions, Atlantis and Pull Request Workflow
- Article 05 — IaC Testing: Terratest, Terraform Native Test and Contract Testing
- Article 06 — IaC Security: Checkov, Trivy and OPA Policy-as-Code
- Article 07 — Terraform Multi-Cloud: AWS + Azure + GCP with Shared Modules
- Article 08 — GitOps for Terraform: Flux TF Controller, Spacelift and Drift Detection
- Article 09 — Terraform vs Pulumi vs OpenTofu: Final Comparison 2026
- Article 10 — Terraform Enterprise Patterns: Workspace, Sentinel, and Team Scaling
Official Resources
- Terraform official documentation
- Terraform Registry — providers, modules and policies
- HashiCorp Learn — Interactive tutorials