Reverse Engineering Your AWS Estate into Terraform Using TerraClaim.Org
If you have ever inherited an AWS estate, you know the feeling before you can even describe it. Hundreds of resources spread across regions you did not know were enabled. Lambdas with no source repos. Config rules that predate the current team. IAM roles that look like they were generated by a sleep-deprived octopus at 2am during a compliance audit.
Eventually someone asks the question: “Can we just put all of this into Terraform?”
You can. But the tooling situation is messier than most guides let on, and the common advice to reach for Terraformer or Former2 is increasingly stale. This guide covers the current state of the tooling landscape honestly, then walks through terraclaim (https://terraclaim.org), a purpose-built open source script that automates the tedious parts correctly.
Why This Is Hard (And Why Most Guides Get It Wrong)
The instinct is to think of this as “exporting Terraform.” It is not. What you are actually doing is closer to reverse compilation: discovering all resources across accounts and regions, generating Terraform configuration from live infrastructure, reconstructing dependencies, capturing state, and then refactoring everything into something a human can maintain.
Two problems make this harder than it looks.
The tooling is older than it appears. Terraformer, the tool most guides recommend, was built by the Waze engineering team and has not had meaningful maintenance in years. It works, but it predates Terraform’s native import blocks and generates output that needs significant cleanup. Former2 is primarily a browser-based tool, and the CLI variant is a separate community project with limited coverage. Both are fine for getting a rough baseline, but neither should be your primary strategy in 2026.
AWS was never designed to be reverse-compiled. Resources reference each other in ways that tooling will not always catch. Some services do not map cleanly to Terraform resources no matter what you do. IAM is particularly brutal, and the relationship between roles, policies, attachments, and instance profiles is rarely clean in a lived-in estate. Accept these rough edges going in and you will be far less surprised.
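To make the shape of the IAM problem concrete: even a single role usually needs several coordinated import blocks, and managed-policy attachments import with a composite ID. A minimal sketch (the role and policy names here are illustrative):

```hcl
import {
  to = aws_iam_role.app
  id = "app-role"
}

# Attachments import with a composite "role-name/policy-arn" ID
import {
  to = aws_iam_role_policy_attachment.app_s3
  id = "app-role/arn:aws:iam::123456789012:policy/app-s3-access"
}
```

Multiply this by every role, inline policy, and instance profile in the account and the untangling effort becomes clear.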
The Tooling Landscape in 2026
Before reaching for a third-party tool, it is worth understanding what is actually available today.
Native Terraform Import Blocks (Terraform 1.5+)
This is the most important thing missing from older guides. Since Terraform 1.5, you can declare imports directly in your configuration as first-class HCL:
```hcl
import {
  to = aws_s3_bucket.my_bucket
  id = "my-existing-bucket"
}

resource "aws_s3_bucket" "my_bucket" {}
```

Combine this with the `-generate-config-out` flag and Terraform will populate the resource block from live state automatically:
```shell
terraform plan -generate-config-out=generated.tf
```

This is version-controlled, reviewable in pull requests, previewed before apply, and supports `for_each` for bulk imports. For targeted imports of known resources, it is now the right default. The limitation is discovery: it does not tell you what resources exist. That is where scripting fills the gap.
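As a sketch of the bulk form (Terraform 1.7+), a single import block can fan out over a map; the bucket names here are illustrative, and in practice the map would come from discovery:

```hcl
locals {
  # Illustrative inventory; in practice this comes from scripted discovery
  buckets = {
    logs   = "acme-prod-logs"
    assets = "acme-prod-assets"
  }
}

import {
  for_each = local.buckets
  to       = aws_s3_bucket.all[each.key]
  id       = each.value
}

resource "aws_s3_bucket" "all" {
  for_each = local.buckets
  bucket   = each.value
}
```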
Terraformer
Still useful for bulk discovery. It supports AWS, GCP, and Azure, and generates both HCL and state. The caveats: it is largely unmaintained, the output requires significant cleanup, and it predates AWS provider 5.x, so generated code often needs attribute corrections. Use it as a starting point, not an end state.
Former2
A browser-based tool that scans your account via the AWS JavaScript SDK and generates HCL, CloudFormation, or Troposphere. Genuinely useful for targeted exports of specific resources. It requires a browser extension to bypass CORS on some services. There is no reliable CLI variant; treat any guide that uses `former2 generate` in a shell script with scepticism.
The Correct Approach for Large Estates
For anything beyond a handful of resources, the workflow that actually works is:
- Use scripted AWS CLI discovery to enumerate every resource across every account and region
- Generate native Terraform import blocks from that discovery
- Run `terraform plan -generate-config-out` to populate the HCL from live state
- Commit the messy baseline, then refactor incrementally
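The first two steps can be sketched in a few lines of Bash. This is not terraclaim's actual code; the `to_import_blocks` helper and the use of S3 as the example service are illustrative:

```shell
#!/usr/bin/env bash
# Sketch: turn a newline-separated resource listing (as produced by an
# AWS CLI discovery call) into native Terraform import blocks.
to_import_blocks() {
  while IFS= read -r bucket; do
    [ -z "$bucket" ] && continue
    # Sanitise the bucket name into a valid Terraform resource label
    safe=$(printf '%s' "$bucket" | tr -c 'a-zA-Z0-9_' '_')
    printf 'import {\n  to = aws_s3_bucket.%s\n  id = "%s"\n}\n\n' "$safe" "$bucket"
  done
}

# In practice the input would come from discovery, e.g.:
#   aws s3api list-buckets --query 'Buckets[].Name' --output text | tr '\t' '\n'
printf 'acme-prod-logs\nacme-prod-assets\n' | to_import_blocks > imports.tf
```

The same pattern generalises to any service whose CLI can list resource identifiers.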
This is what the tool below automates.
Introducing terraclaim
terraclaim is an open source Bash script that sweeps your AWS estate across accounts and regions and generates ready-to-use Terraform import blocks and skeleton resource configurations, structured by account, region, and service.
GitHub: https://github.com/andrewbakercloudscale/terraclaim
Main Site: https://terraclaim.org
What It Covers
Out of the box it handles the most common services found in production AWS estates:
| Category | Services |
|---|---|
| Compute | EC2, EBS, ECS, EKS, Lambda |
| Networking | VPC, ELB/ALB/NLB, CloudFront, Route53, ACM, Transit Gateway, VPC Endpoints |
| Data | RDS, DynamoDB, ElastiCache, MSK, S3, EFS, OpenSearch, Redshift, DocumentDB |
| Streaming | Kinesis Data Streams, Kinesis Firehose |
| Integration | SQS, SNS, API Gateway, EventBridge, Step Functions, SES |
| Security & Compliance | IAM, KMS, Secrets Manager, WAFv2, Config, CloudTrail, GuardDuty |
| Platform & CI/CD | ECR, SSM, CloudWatch, Backup, CodePipeline, CodeBuild |
| Auth | Cognito (user pools + identity pools) |
| ETL | Glue (jobs, crawlers, databases, connections) |
| Storage & Transfer | FSx (Windows/Lustre/ONTAP/OpenZFS), Transfer (SFTP/FTPS) |
Adding a new service is two steps: write an `export_<service>()` function and add one line to the dispatch table. Nothing else changes.
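The pattern looks roughly like the following. The function and table names follow the convention described here, but the body is an illustrative sketch, not terraclaim's actual implementation:

```shell
#!/usr/bin/env bash
# Illustrative exporter/dispatch sketch (Bash 4+ for associative arrays).
export_sqs() {
  # A real exporter would emit import blocks; this sketch just lists queues.
  aws sqs list-queues --query 'QueueUrls[]' --output text
}

declare -A SERVICE_MAP=(
  [sqs]="export_sqs"
)

dispatch() {
  local svc=$1
  "${SERVICE_MAP[$svc]}"
}
```

Because each exporter is looked up by name, adding a service never touches the main loop.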
Requirements
- AWS CLI v2
- `jq`
- Terraform >= 1.5
- Bash 4+
- Appropriate IAM permissions (ReadOnlyAccess is sufficient for discovery)
Getting Started
Install
```shell
git clone https://github.com/andrewbakercloudscale/terraclaim
cd terraclaim
chmod +x terraclaim.sh reconcile.sh drift.sh run.sh
```

Verify your dependencies are in place:

```shell
aws sts get-caller-identity
terraform version   # must be >= 1.5
jq --version
```

Basic Usage
```text
./terraclaim.sh [OPTIONS]

Options:
  --accounts "123456789012,987654321098"   Comma-separated account IDs
  --regions "us-east-1,eu-west-1"          Comma-separated regions
  --services "ec2,s3,rds"                  Comma-separated service names
  --role "OrganizationAccountAccessRole"   Cross-account role to assume
  --state-bucket "my-tf-state-bucket"      S3 bucket for remote state backend
  --state-region "us-east-1"               Region of the state bucket
  --output "./tf-output"                   Output root directory
  --dry-run                                Print what would be exported without writing files
  --debug                                  Verbose logging
```

Example Runs
Dry run: see what would be discovered before writing anything
Always start here. It will tell you exactly what the script intends to export without touching the filesystem.
```shell
./terraclaim.sh \
  --regions "us-east-1" \
  --services "ec2,vpc,rds" \
  --dry-run
```

Output:
```text
[10:14:02] [INFO ] Dependencies OK (terraform 1.7.4)
[10:14:02] [WARN ] DRY-RUN mode — no files will be written
[10:14:02] [INFO ] ========================================
[10:14:02] [INFO ] Account: 123456789012
[10:14:02] [INFO ] ========================================
[10:14:02] [INFO ] Processing region: us-east-1
[10:14:03] [INFO ] [vpc] scanning...
[10:14:04] [INFO ] [vpc] 23 resources found
[10:14:04] [INFO ] [ec2] scanning...
[10:14:06] [INFO ] [ec2] 14 resources found
[10:14:06] [INFO ] [rds] scanning...
[10:14:08] [INFO ] [rds] 6 resources found
[10:14:08] [INFO ] [DRY-RUN] Would write: ./tf-output/123456789012/us-east-1/vpc/backend.tf
[10:14:08] [INFO ] [DRY-RUN] Would write: ./tf-output/123456789012/us-east-1/vpc/imports.tf
[10:14:08] [INFO ] [DRY-RUN] Would write: ./tf-output/123456789012/us-east-1/vpc/resources.tf
[10:14:08] [INFO ] [DRY-RUN] Would write: ./tf-output/123456789012/us-east-1/ec2/backend.tf
[10:14:08] [INFO ] [DRY-RUN] Would write: ./tf-output/123456789012/us-east-1/ec2/imports.tf
[10:14:08] [INFO ] [DRY-RUN] Would write: ./tf-output/123456789012/us-east-1/ec2/resources.tf
[10:14:08] [INFO ] [DRY-RUN] Would write: ./tf-output/123456789012/us-east-1/rds/backend.tf
[10:14:08] [INFO ] [DRY-RUN] Would write: ./tf-output/123456789012/us-east-1/rds/imports.tf
[10:14:08] [INFO ] [DRY-RUN] Would write: ./tf-output/123456789012/us-east-1/rds/resources.tf
```

Single account, targeted services, with remote state
```shell
./terraclaim.sh \
  --regions "us-east-1,eu-west-1" \
  --services "ec2,eks,rds,s3,vpc" \
  --state-bucket my-tf-state-prod \
  --state-region us-east-1 \
  --output ./tf-output
```

Output:
```text
[10:22:01] [INFO ] Dependencies OK (terraform 1.7.4)
[10:22:01] [INFO ] ========================================
[10:22:01] [INFO ] Account: 123456789012
[10:22:01] [INFO ] ========================================
[10:22:01] [INFO ] Processing region: us-east-1
[10:22:02] [INFO ] [vpc] scanning...
[10:22:04] [INFO ] [vpc] 23 resources found
[10:22:04] [INFO ] Wrote backend.tf (s3://my-tf-state-prod/123456789012/us-east-1/vpc/terraform.tfstate)
[10:22:05] [INFO ] [ec2] scanning...
[10:22:09] [INFO ] [ec2] 14 resources found
[10:22:09] [INFO ] Wrote backend.tf (s3://my-tf-state-prod/123456789012/us-east-1/ec2/terraform.tfstate)
[10:22:10] [INFO ] [eks] scanning...
[10:22:14] [INFO ] [eks] 9 resources found
[10:22:14] [INFO ] Wrote backend.tf (s3://my-tf-state-prod/123456789012/us-east-1/eks/terraform.tfstate)
[10:22:15] [INFO ] [rds] scanning...
[10:22:18] [INFO ] [rds] 6 resources found
[10:22:19] [INFO ] [s3] scanning (filtering to region us-east-1)...
[10:22:24] [INFO ] [s3] 31 buckets found in us-east-1
[10:22:24] [INFO ] Processing region: eu-west-1
[10:22:25] [INFO ] [vpc] scanning...
[10:22:26] [INFO ] [vpc] 11 resources found
...

Next steps:
  1. Review the generated import blocks:
       find tf-output -name 'imports.tf' | head -5
  2. For each service directory, generate full HCL from live state:
       cd tf-output/123456789012/us-east-1/eks
       terraform init
       terraform plan -generate-config-out=generated.tf
  3. Review generated.tf, remove drift, refactor into modules.
  4. Commit as baseline-import branch — refactor incrementally from there.

NOTE: Always run 'terraform plan' after import to verify zero drift
      before merging to main.
```

Multi-account org sweep
```shell
./terraclaim.sh \
  --accounts "123456789012,234567890123,345678901234" \
  --role OrganizationAccountAccessRole \
  --regions "us-east-1,eu-west-1,ap-southeast-2" \
  --state-bucket my-tf-state-org \
  --output ./tf-output \
  --debug
```

What the Output Looks Like
After a run, your output directory is structured like this:
```text
tf-output/
├── summary.txt
└── 123456789012/
    ├── us-east-1/
    │   ├── ec2/
    │   │   ├── backend.tf
    │   │   ├── imports.tf
    │   │   └── resources.tf
    │   ├── eks/
    │   │   ├── backend.tf
    │   │   ├── imports.tf
    │   │   └── resources.tf
    │   ├── lambda/
    │   │   ├── backend.tf
    │   │   ├── imports.tf
    │   │   ├── resources.tf
    │   │   └── _packages/
    │   │       ├── my-auth-function.zip
    │   │       └── my-worker-function.zip
    │   └── rds/
    │       ├── backend.tf
    │       ├── imports.tf
    │       └── resources.tf
    └── eu-west-1/
        └── ...
```

Each service directory is a self-contained Terraform root module. The three generated files serve distinct purposes.
`backend.tf`: Remote state configuration and provider setup, pre-populated with the correct S3 key path for this account/region/service combination:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state-prod"
    key            = "123456789012/us-east-1/eks/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
```

`imports.tf`: Native Terraform 1.5+ import blocks, one per discovered resource:
```hcl
import {
  to = aws_eks_cluster.cluster_production
  id = "production"
}

import {
  to = aws_eks_node_group.ng_production_application
  id = "production:application"
}

import {
  to = aws_eks_node_group.ng_production_monitoring
  id = "production:monitoring"
}

import {
  to = aws_eks_addon.addon_production_coredns
  id = "production:coredns"
}

import {
  to = aws_eks_addon.addon_production_vpc_cni
  id = "production:vpc-cni"
}
```

`resources.tf`: Skeleton resource blocks ready for `terraform plan -generate-config-out` to populate:
```hcl
resource "aws_eks_cluster" "cluster_production" {
  # Auto-generated skeleton — run:
  #   terraform plan -generate-config-out=generated.tf
  # to populate all attributes from live state.
}

resource "aws_eks_node_group" "ng_production_application" {}
resource "aws_eks_node_group" "ng_production_monitoring" {}
resource "aws_eks_addon" "addon_production_coredns" {}
resource "aws_eks_addon" "addon_production_vpc_cni" {}
```

Generating Full HCL from Live State
Once the import blocks are in place, run this in any service directory:
```shell
cd tf-output/123456789012/us-east-1/eks
terraform init
terraform plan -generate-config-out=generated.tf
```

Terraform will query every resource via the provider and write a fully populated `generated.tf`:
```hcl
resource "aws_eks_cluster" "cluster_production" {
  name     = "production"
  role_arn = "arn:aws:iam::123456789012:role/eks-cluster-role"
  version  = "1.29"

  vpc_config {
    subnet_ids              = ["subnet-0a1b2c3d", "subnet-0e4f5a6b"]
    endpoint_private_access = true
    endpoint_public_access  = false
    security_group_ids      = ["sg-0abc123def456"]
  }

  enabled_cluster_log_types = ["api", "audit", "authenticator"]

  tags = {
    Environment = "production"
    Team        = "platform"
  }
}
```

Review it, clean up any computed attributes Terraform cannot track, and run `terraform apply` to bind the live resources to state. Then run `terraform plan` once more; a clean run with no changes means the import is complete.
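In CI, that final check can be automated with `terraform plan -detailed-exitcode`, which exits 0 only when the plan is empty and 2 when changes are pending. A minimal sketch; the `assert_no_drift` helper name is illustrative:

```shell
# Sketch: fail the pipeline when the post-import plan is not empty.
# -detailed-exitcode: 0 = no changes, 1 = error, 2 = pending changes.
assert_no_drift() {
  local rc=0
  terraform plan -detailed-exitcode -input=false >/dev/null || rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "no drift"
  elif [ "$rc" -eq 2 ]; then
    echo "plan not empty: review before merging" >&2
    return 1
  else
    echo "terraform plan failed (exit $rc)" >&2
    return 1
  fi
}
```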
Does It Catch Everything?
No tool can guarantee full coverage of a large AWS estate, and this one is honest about that. After running the export, use the included reconcile.sh script to diff what was exported against what AWS Resource Explorer sees as the full account inventory.
```shell
./reconcile.sh \
  --output ./tf-output \
  --index-region us-east-1
```

This queries Resource Explorer's aggregator index, compares every discovered resource against the generated import blocks, and produces a report grouped by region and service:
```text
Summary
-------
Total resources (Resource Explorer): 847
Matched to exported import blocks:   801
Potentially missed:                   46
Coverage: 94%

Potentially Missed Resources (grouped by region, then service)

Region: us-east-1
------------------------------------------------------------
  Service: wafv2
    Type: aws:wafv2:web-acl
    ARN:  arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-acl/abc123

  Service: cognito-idp
    Type: aws:cognito-idp:userpool
    ARN:  arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_ABC123
```

Each entry in the missed list is a decision point: add it to the Terraform estate, mark it intentionally unmanaged, or open a PR to add the service exporter. The report tells you exactly which `--services` flag to use and points to CONTRIBUTING.md for adding new exporters.
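The heart of a reconciliation like this is a set difference over sorted ARN lists, which needs nothing beyond coreutils. A sketch with illustrative file contents:

```shell
# Sketch: ARNs known to Resource Explorer but absent from the imported
# set are the "potentially missed" resources. Contents are illustrative.
printf 'arn:aws:s3:::bucket-a\narn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-acl/abc123\n' > explorer_arns.txt
printf 'arn:aws:s3:::bucket-a\n' > imported_arns.txt

# comm -23 prints lines unique to the first (sorted) input
comm -23 <(sort explorer_arns.txt) <(sort imported_arns.txt)
```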
A Note on Lambda Packages
Lambdas deserve special attention, and the script handles this automatically. For every discovered function, it downloads the deployment package into `_packages/`:
```text
lambda/
├── _packages/
│   ├── auth-service.zip
│   ├── payment-processor.zip
│   └── notification-worker.zip
├── backend.tf
├── imports.tf
└── resources.tf
```

In many inherited estates, the deployment package in AWS is the only surviving copy of the code. The script retrieves it before you start making changes. You will thank yourself later.
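If you want to do the same retrieval by hand, `aws lambda get-function` returns a short-lived presigned URL for the package under `Code.Location`. A sketch; the `fetch_lambda_package` helper and the function name are illustrative:

```shell
# Sketch: download a function's deployment package before touching it.
fetch_lambda_package() {
  local fn=$1 dest=$2
  local url
  # Code.Location is a presigned S3 URL, valid only for a few minutes
  url=$(aws lambda get-function --function-name "$fn" \
          --query 'Code.Location' --output text)
  curl -sSL "$url" -o "$dest"
}

# Example: fetch_lambda_package my-auth-function _packages/my-auth-function.zip
```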
The Parts That Will Not Go Smoothly
No guide should pretend this is clean. Here is what to expect.
IAM will be the hardest part. The relationship between roles, policies, and attachments in a lived-in account is rarely clean. The script imports what it can enumerate, but the dependencies between resources will likely need manual untangling. Budget time for it.
terraform plan will show drift. After importing, running `terraform plan` almost always reveals differences between what Terraform infers from the API and what is in your generated configuration. This is normal. Work through each difference: some will be computed attributes you can remove, others will be genuine configuration drift that needs a decision.
Some resources will be missed. The script covers the most common services found in production estates. The reconciliation report will surface the gaps. If you have unusual services, add an exporter following the pattern in CONTRIBUTING.md.
Provider version mismatches. Always pin your provider version before running any of this. The backend.tf files generated by the script pin to `~> 5.0`; verify this matches your environment.
The Workflow That Works
The pattern that gets this over the finish line every time is the same regardless of estate size.
- Run the script with `--dry-run` first. Verify the resource counts look right.
- Run for real, starting with a single region and your most important services.
- For each service directory: `terraform init`, then `terraform plan -generate-config-out=generated.tf`.
- Review `generated.tf`. Remove or fix anything that causes a non-empty second plan.
- Run `reconcile.sh` to identify gaps. Decide what to do with each missed resource.
- Commit everything as a `baseline-import` branch; messy and imperfect is fine.
- Refactor incrementally from there, one pull request at a time.
The failure mode on projects like this is always the same: waiting for the output to be clean before committing anything. It will never be clean enough if you set that as the bar. Commit the messy baseline. That is the whole point.
drift.sh — Detecting Changes After Day One
Once your baseline is committed, resources will be created or deleted outside Terraform over time. `drift.sh` re-scans AWS and diffs the results against your `imports.tf` files — no AWS Resource Explorer required, just the AWS CLI.
Resources found in AWS but missing from `imports.tf` are flagged as NEW. Resources in `imports.tf` that no longer exist in AWS are flagged as REMOVED. With `--apply`, new import blocks are appended and stale ones are commented out with a timestamp. Drop it into a nightly CI job and pipe the output to Slack for continuous governance without a commercial tool.
run.sh — One Command for the Entire Output Tree
After exporting, you previously had to `cd` into every service directory and run `terraform init` and `terraform plan -generate-config-out=generated.tf` manually. `run.sh` walks the entire output tree and does this for every service directory in one command, with up to three parallel Terraform processes, per-directory `.run.log` files, and a pass/fail/no-change summary at the end.
Both `terraclaim.sh` and `drift.sh` also support `--parallel N` for concurrent service scans (default 5) and automatically retry on AWS API throttling with exponential back-off — essential for large estates scanning dozens of services across multiple regions.
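The retry behaviour can be sketched as a generic wrapper; `with_backoff` here is an illustrative helper, not terraclaim's actual code:

```shell
# Sketch: retry a throttle-prone command with exponential back-off.
with_backoff() {
  local attempt=0 max=5 delay=1
  while true; do
    if "$@"; then
      return 0
    fi
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$max" ]; then
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))  # 1s, 2s, 4s, 8s between attempts
  done
}

# Example: with_backoff aws ec2 describe-instances --region us-east-1
```

A production version would also add jitter and match only throttling errors, but the shape is the same.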
Contributing
The script is designed to be extended. Each service is a self-contained function following a consistent pattern. To add a new service:
- Write an `export_<service>()` function following the conventions in the existing exporters
- Add one line to the `SERVICE_MAP` dispatch table
- Add the service name to the `ALL_SERVICES` array
- Open a pull request
See CONTRIBUTING.md in the repository for the full guide.