Terraform State: Remote Backend with S3/GCS, Locking and Import
If there is a single aspect of Terraform that causes the most accidents in production, it
is state management. I've seen teams waste hours fixing corrupted state files,
configurations that diverged because two developers ran apply in parallel,
and "ghost" resources that exist in the cloud but not in the state. All these problems
have a common cause: local state without locking.
This guide is technical and practical: we will configure an S3 backend with DynamoDB locking for AWS
and the GCS backend for Google Cloud, discuss workspaces for separating environments,
and see how to import existing resources without downtime using
the import block introduced in Terraform 1.5 (finally declarative and safe).
What You Will Learn
- Why local state is dangerous in teams and how to migrate to remote backend
- Set up S3 backend with DynamoDB locking on AWS (production-ready)
- Configure GCS backend on Google Cloud Platform
- Terraform workspaces: patterns for separating environments
- Import existing resources with the import block (Terraform 1.5+)
- Emergency operations: state manipulation, force unlock, backup
- Partial backend configuration: manage credentials without hardcoding
Why the Remote Backend is Fundamental
A local terraform.tfstate file has four fundamental problems in a team:
- No locking: if two developers run terraform apply at the same time, both read the same state and write partial changes, leading to corrupted state and duplicate resources.
- Not shared: each developer has their own local copy. Who has the latest version? Who ran the last apply? Impossible to know.
- Contains secrets: the state often includes sensitive values (RDS passwords, API keys). It must never go into Git.
- No history: there is no audit trail of who did what and when.
The remote backend solves all these problems: centralized state, atomic locking, encryption at rest and versioning for rollback.
Bootstrap: Creating Backend Resources with Terraform
The paradox of the Terraform backend is that you have to create AWS resources (an S3 bucket and a DynamoDB table) before you can configure the backend. The solution is to bootstrap with a separate mini-project that uses the local backend.
# bootstrap/main.tf — creates the resources for the remote backend
# This project uses the local backend (terraform.tfstate in the directory)
# Commit it to the repo as "infra/bootstrap/"
terraform {
required_version = ">= 1.8.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
# NOTE: no backend block here — it uses the local state file
}
provider "aws" {
region = "eu-west-1"
}
locals {
name = "acme"
environment = "global"
}
# S3 bucket for the state
resource "aws_s3_bucket" "terraform_state" {
bucket = "${local.name}-terraform-state"
# Protection against accidental deletion
lifecycle {
prevent_destroy = true
}
tags = {
Name = "${local.name}-terraform-state"
ManagedBy = "Terraform"
Environment = local.environment
}
}
# Versioning: every apply creates a new version of the state file
resource "aws_s3_bucket_versioning" "terraform_state" {
bucket = aws_s3_bucket.terraform_state.id
versioning_configuration {
status = "Enabled"
}
}
# Encryption at rest: mandatory for state containing secrets
resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
bucket = aws_s3_bucket.terraform_state.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "aws:kms"
}
bucket_key_enabled = true # Reduces KMS costs by ~99%
}
}
# Block public access: the state must NOT be public
resource "aws_s3_bucket_public_access_block" "terraform_state" {
bucket = aws_s3_bucket.terraform_state.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
# Bucket policy: deny any non-TLS access
resource "aws_s3_bucket_policy" "terraform_state" {
bucket = aws_s3_bucket.terraform_state.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Sid = "DenyNonTLS"
Effect = "Deny"
Principal = "*"
Action = "s3:*"
Resource = [
aws_s3_bucket.terraform_state.arn,
"${aws_s3_bucket.terraform_state.arn}/*"
]
Condition = {
Bool = {
"aws:SecureTransport" = "false"
}
}
}
]
})
}
# DynamoDB table for locking
resource "aws_dynamodb_table" "terraform_locks" {
name = "${local.name}-terraform-locks"
billing_mode = "PAY_PER_REQUEST" # Nessun costo fisso
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
# Protection against deletion
lifecycle {
prevent_destroy = true
}
tags = {
Name = "${local.name}-terraform-locks"
ManagedBy = "Terraform"
Environment = local.environment
}
}
# Outputs: values to copy into the backend block of your projects
output "state_bucket_name" {
value = aws_s3_bucket.terraform_state.id
description = "Nome del bucket S3 per lo state Terraform"
}
output "dynamodb_table_name" {
value = aws_dynamodb_table.terraform_locks.name
description = "Nome della tabella DynamoDB per il locking"
}
output "aws_region" {
value = "eu-west-1"
description = "Region AWS dove sono create le risorse del backend"
}
Configure the S3 Backend
Once the bootstrap resources exist, configure the backend in your projects.
The backend block does not support Terraform variables (a known limitation),
so use partial backend configuration to keep credentials out of the code.
# versions.tf — S3 backend configuration
terraform {
required_version = ">= 1.8.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
# Backend with partial configuration (static, non-sensitive values)
backend "s3" {
bucket = "acme-terraform-state"
key = "environments/prod/networking/terraform.tfstate"
region = "eu-west-1"
dynamodb_table = "acme-terraform-locks"
encrypt = true
# Do NOT put access_key and secret_key here!
# Use environment variables or an IAM role
}
}
# Recommended structure for the "key" (state file path):
# {account}/{environment}/{stack}/terraform.tfstate
#
# Examples:
# "123456789/dev/networking/terraform.tfstate"
# "123456789/prod/eks-cluster/terraform.tfstate"
# "123456789/shared/monitoring/terraform.tfstate"
# Initialization with partial backend config
# (pass the missing values as -backend-config)
terraform init \
-backend-config="bucket=acme-terraform-state" \
-backend-config="key=environments/dev/networking/terraform.tfstate" \
-backend-config="region=eu-west-1" \
-backend-config="dynamodb_table=acme-terraform-locks"
# Or with a backend config file
# backend.hcl (do NOT commit it if it contains credentials)
# bucket = "acme-terraform-state"
# key = "environments/dev/networking/terraform.tfstate"
# region = "eu-west-1"
# dynamodb_table = "acme-terraform-locks"
terraform init -backend-config=backend.hcl
# Migrate from the local backend to the remote one
terraform init -migrate-state
# Terraform asks for confirmation before copying the local state to S3
The DynamoDB Lock
While terraform apply is running, a row is created in DynamoDB
with LockID = {bucket}/{key}. If the process is stopped abruptly
(kill, crash), the lock may not be released. To force unlock:
terraform force-unlock {LOCK_ID}
# The LOCK_ID is shown in the error message when Terraform finds the lock
Use force-unlock only if you are certain that no other apply is in progress.
Unlocking while an apply is running can corrupt the state.
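Before forcing an unlock, it can help to look at the lock rows directly. A sketch with the AWS CLI (read access to the table is assumed; the table name comes from the bootstrap above):

```shell
# List lock entries in the DynamoDB table. Besides active locks, the S3
# backend also stores state-checksum rows here, so filter on "Info",
# which is present only on lock items and records who holds the lock
aws dynamodb scan \
  --table-name acme-terraform-locks \
  --filter-expression "attribute_exists(Info)" \
  --projection-expression "LockID, Info"
```

If the scan returns nothing, no lock is held and a force-unlock is unnecessary.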
GCS Backend for Google Cloud Platform
# versions.tf — GCS backend
terraform {
required_version = ">= 1.8.0"
required_providers {
google = {
source = "hashicorp/google"
version = "~> 5.0"
}
}
backend "gcs" {
bucket = "acme-terraform-state-eu"
prefix = "environments/prod/networking"
# prefix = "environments/{environment}/{stack}"
# Il file effettivo sara: {prefix}/default.tfstate
}
}
# Create the GCS bucket for the state (bootstrap)
resource "google_storage_bucket" "terraform_state" {
name = "acme-terraform-state-eu"
location = "EU"
force_destroy = false
# Versioning for rollback
versioning {
enabled = true
}
# Encryption with CMEK (optional but recommended)
# encryption {
# default_kms_key_name = google_kms_crypto_key.terraform_state.id
# }
# Lifecycle rule: delete archived versions beyond the 5 most recent
lifecycle_rule {
condition {
num_newer_versions = 5
with_state = "ARCHIVED"
}
action {
type = "Delete"
}
}
}
# Restrict access: authoritative binding — only the Terraform SA holds storage.admin on this bucket
resource "google_storage_bucket_iam_binding" "terraform_state_private" {
bucket = google_storage_bucket.terraform_state.name
role = "roles/storage.admin"
members = [
"serviceAccount:terraform@acme-project.iam.gserviceaccount.com"
]
}
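Credentials for the GCS backend normally come from Application Default Credentials. As an alternative to service account key files, the backend also supports impersonation; a sketch, reusing the service account from the binding above:

```shell
# Initialize against the GCS bucket, impersonating the Terraform SA
# (your identity needs roles/iam.serviceAccountTokenCreator on that SA)
terraform init \
  -backend-config="impersonate_service_account=terraform@acme-project.iam.gserviceaccount.com"
```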
Terraform Workspaces: Separating Environments
Terraform workspaces let you maintain separate state files from the
same HCL configuration. With the S3 backend, each workspace gets its own state file under
env:/{workspace_name}/{key}.
# Workspace management
terraform workspace list
# * default
terraform workspace new dev
terraform workspace new staging
terraform workspace new prod
terraform workspace list
# default
# dev
# staging
# * prod
terraform workspace select prod
terraform workspace show # prints the name of the current workspace
# Delete a workspace (it must be empty)
terraform workspace delete staging
# Using the workspace in the HCL configuration
locals {
# Get the name of the current workspace
workspace = terraform.workspace
# Workspace-specific configuration
workspace_config = {
dev = {
instance_type = "t3.micro"
min_instances = 1
max_instances = 2
enable_monitoring = false
rds_class = "db.t3.micro"
multi_az = false
}
staging = {
instance_type = "t3.small"
min_instances = 1
max_instances = 3
enable_monitoring = true
rds_class = "db.t3.small"
multi_az = false
}
prod = {
instance_type = "t3.medium"
min_instances = 2
max_instances = 6
enable_monitoring = true
rds_class = "db.t3.medium"
multi_az = true
}
}
# Access the configuration for the current workspace,
# falling back to "dev" if the workspace is not in the map
current_config = lookup(local.workspace_config, local.workspace, local.workspace_config["dev"])
# Naming with the workspace as prefix
name_prefix = "${local.workspace}-${var.project_name}"
}
# Use the values from the workspace configuration
resource "aws_autoscaling_group" "web" {
min_size = local.current_config.min_instances
max_size = local.current_config.max_instances
desired_capacity = local.current_config.min_instances
# ...
}
resource "aws_db_instance" "main" {
instance_class = local.current_config.rds_class
multi_az = local.current_config.multi_az
# ...
}
Workspaces vs Separate Directories: When to Use What
Workspaces suit structurally identical environments that differ only in scaling (dev/staging/prod). If the environments have different architectures (e.g. prod has a VPN, dev doesn't), use separate directories that reuse shared modules. Many teams with complex infrastructures prefer separate directories in all cases, for maximum clarity.
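As a sketch of the separate-directories pattern: each environment is its own root module calling shared modules, so architectural differences become explicit arguments (module path and variable names here are hypothetical):

```hcl
# environments/prod/main.tf — prod-specific root module
module "network" {
  source     = "../../modules/network" # shared module, assumed path
  cidr_block = "10.0.0.0/16"
  enable_vpn = true # architectural difference: only prod has a VPN
}

# environments/dev/main.tf would call the same module
# with enable_vpn = false and a smaller CIDR
```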
Import Existing Resources with the Import Block
One of the most common scenarios when adopting Terraform in an existing organization is having to
manage resources created manually or with other tools. Before Terraform 1.5, import was
only imperative (the terraform import CLI), risky and hard to review.
The import block (GA in 1.5, extended in 1.7 with for_each support) is declarative and plannable.
# main.tf — declarative import (Terraform 1.5+)
# Step 1: define the import block
import {
# ID of the resource in the cloud provider
id = "vpc-0123456789abcdef0"
# Points to the HCL resource you want to bind it to
to = aws_vpc.main
}
# Step 2: write the HCL configuration of the resource
# (you can use "terraform plan -generate-config-out=generated.tf" to auto-generate it)
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "prod-main-vpc"
Environment = "prod"
ManagedBy = "Terraform" # Aggiunta da noi
}
}
# Step 3: terraform plan verifies that the state to import
# matches the HCL configuration.
# If there are differences, Terraform highlights them as "~ to change"
# Multiple imports: import a whole group of resources
import {
id = "subnet-0abc123"
to = aws_subnet.public[0]
}
import {
id = "subnet-0def456"
to = aws_subnet.public[1]
}
# Import with for_each (Terraform 1.7+)
locals {
existing_subnets = {
"public-1" = "subnet-0abc123"
"public-2" = "subnet-0def456"
}
}
import {
for_each = local.existing_subnets
id = each.value
to = aws_subnet.public[each.key]
}
# Complete import workflow
# 1. Auto-generate the HCL configuration from the existing resource
# (Terraform 1.5+ with -generate-config-out)
terraform plan -generate-config-out=generated_imports.tf
# 2. Review the generated file: it contains the resource configuration
# as Terraform sees it in the cloud provider
cat generated_imports.tf
# 3. Copy and adapt the configuration into your main.tf
# (the generated file is not perfect; review it)
# 4. Run plan to verify that the import is correct
terraform plan
# If the plan shows "Plan: N to import, 0 to add, 0 to change, 0 to destroy",
# the import is clean
# 5. Apply: performs the actual import and updates the state
terraform apply
# 6. Remove the import blocks from the code after the apply
# (they are no longer needed once the resources are in the state)
State Manipulation: Emergency Operations
There are situations where you need to manipulate the state directly. These operations are powerful and risky: always take a backup first.
# ALWAYS take a backup before any state operation
terraform state pull > backup-$(date +%Y%m%d-%H%M%S).tfstate
# Remove a resource from the state (the resource stays in the cloud)
# Use case: you want Terraform to "forget" a resource
# without deleting it (e.g. you start managing it with another tool)
terraform state rm aws_instance.legacy_server
# Move a resource from one address to another in the state
# Use case: code refactoring without destroying resources
terraform state mv aws_instance.web aws_instance.web_new
terraform state mv 'aws_subnet.public[0]' 'aws_subnet.public["eu-west-1a"]'
# Show the details of a resource in the state
terraform state show aws_vpc.main
# Pull the remote state locally (useful for inspection)
terraform state pull > current.tfstate
# Push a local state to the remote backend
# DANGEROUS: overwrites the remote state without confirmation
# Use only for recovery from a corrupted state
terraform state push recovered.tfstate
# Refresh: update the state by reading the real state of the cloud
# (terraform refresh is deprecated in favor of terraform apply -refresh-only)
terraform apply -refresh-only
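Since Terraform 1.1 and 1.7 respectively, two of these CLI operations also have declarative counterparts that go through code review like any other change:

```hcl
# "moved" block (Terraform 1.1+): declarative equivalent of
# "terraform state mv" — refactor addresses without touching the state by hand
moved {
  from = aws_instance.web
  to   = aws_instance.web_new
}

# "removed" block (Terraform 1.7+): declarative equivalent of
# "terraform state rm" — forget the resource without destroying it
removed {
  from = aws_instance.legacy_server

  lifecycle {
    destroy = false
  }
}
```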
State Partitioning: Strategies for Large Teams
In organizations with large infrastructures, a single monolithic state becomes
a problem: every plan must check hundreds of resources, locks block
the whole team, and an error in one module blocks the entire deployment. The solution is to split
the state into independent stacks.
# Recommended structure for large-scale infrastructures
# Each directory is an independent Terraform project with its own state
environments/
├── prod/
│ ├── networking/ # State: prod/networking/terraform.tfstate
│ │ ├── main.tf
│ │ └── versions.tf
│ ├── eks-cluster/ # State: prod/eks-cluster/terraform.tfstate
│ │ ├── main.tf
│ │ └── versions.tf
│ ├── databases/ # State: prod/databases/terraform.tfstate
│ │ ├── main.tf
│ │ └── versions.tf
│ └── monitoring/ # State: prod/monitoring/terraform.tfstate
│ ├── main.tf
│ └── versions.tf
└── dev/
└── ...
# Passing information between stacks: the terraform_remote_state data source
# the eks-cluster stack reads the outputs from networking
data "terraform_remote_state" "networking" {
backend = "s3"
config = {
bucket = "acme-terraform-state"
key = "environments/prod/networking/terraform.tfstate"
region = "eu-west-1"
}
}
resource "aws_eks_cluster" "main" {
name = "prod-cluster"
role_arn = aws_iam_role.eks.arn
vpc_config {
# Use the outputs from the networking stack
subnet_ids = data.terraform_remote_state.networking.outputs.private_subnet_ids
endpoint_private_access = true
endpoint_public_access = false
}
}
# Modern alternative: use variables with .tfvars files instead of
# terraform_remote_state. Safer (avoids dependencies between states),
# less convenient (values must be updated manually)
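As a sketch of that .tfvars alternative (variable and file names are hypothetical):

```hcl
# variables.tf in the eks-cluster stack
variable "private_subnet_ids" {
  type        = list(string)
  description = "Subnet IDs exported by the networking stack"
}

# prod.tfvars — updated by hand when networking changes:
# private_subnet_ids = ["subnet-0abc123", "subnet-0def456"]
```

The trade-off is explicit: no hidden coupling between state files, at the cost of a manual copy step whenever the upstream stack changes.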
IAM Policy for the S3 Backend
In production, the IAM role used by Terraform (developers or CI/CD) must have only the minimum permissions needed for the backend. Here is the complete policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "TerraformStateBucketAccess",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket",
"s3:GetBucketVersioning",
"s3:GetEncryptionConfiguration"
],
"Resource": [
"arn:aws:s3:::acme-terraform-state",
"arn:aws:s3:::acme-terraform-state/*"
]
},
{
"Sid": "TerraformStateLocking",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:DescribeTable"
],
"Resource": "arn:aws:dynamodb:eu-west-1:*:table/acme-terraform-locks"
},
{
"Sid": "TerraformKMSDecryptState",
"Effect": "Allow",
"Action": [
"kms:GenerateDataKey",
"kms:Decrypt"
],
"Resource": "arn:aws:kms:eu-west-1:*:key/*"
}
]
}
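One way to wire this up is with Terraform itself; a sketch where the role name and policy file path are assumptions:

```hcl
# Attach the backend policy above to the role Terraform runs as
resource "aws_iam_role_policy" "terraform_backend" {
  name   = "terraform-backend-access"
  role   = aws_iam_role.terraform_ci.id # hypothetical CI role
  policy = file("${path.module}/backend-policy.json")
}
```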
Conclusions and Next Steps
State management is the foundation on which everything else in your Terraform setup is built. A remote backend with proper locking protects you from an entire class of operational accidents. Take the time to set it up correctly from the start: migrating from local to remote state on an existing project is feasible but laborious.
The next step is to integrate Terraform into a professional CI/CD pipeline:
not just running plan and apply automatically, but managing
the plan review workflow in Pull Requests, blocking unauthorized applies,
and notifying on detected drift.
The Complete Series: Terraform and IaC
- Article 01 — Terraform from Scratch: HCL, Provider and Plan-Apply-Destroy
- Article 02 — Designing Reusable Terraform Modules: Structure, I/O and Registry
- Article 03 (this) — Terraform State: Remote Backend with S3/GCS, Locking and Import
- Article 04 — Terraform in CI/CD: GitHub Actions, Atlantis and Pull Request Workflow
- Article 05 — IaC Testing: Terratest, Terraform Native Test and Contract Testing
- Article 06 — IaC Security: Checkov, Trivy and OPA Policy-as-Code
- Article 07 — Terraform Multi-Cloud: AWS + Azure + GCP with Shared Modules
- Article 08 — GitOps for Terraform: Flux TF Controller, Spacelift and Drift Detection
- Article 09 — Terraform vs Pulumi vs OpenTofu: Final Comparison 2026
- Article 10 — Terraform Enterprise Patterns: Workspace, Sentinel, and Team Scaling







