Two people ran terraform apply at the same time and the state is corrupt
Remote state with locking is mandatory as soon as more than one person works on the same configuration: S3 plus DynamoDB on AWS, a Storage Account on Azure, a GCS bucket on GCP. It is a one-time setup, and the race conditions are gone.
Try this first
1. Create an S3 bucket with versioning enabled for state files. A separate bucket per environment limits the blast radius.
2. Create a DynamoDB table with the partition key LockID (type String). No capacity planning needed; on-demand billing (PAY_PER_REQUEST) is enough.
3. In each Terraform configuration, configure the s3 backend with bucket, key, region and dynamodb_table, then run terraform init -migrate-state.
4. On Azure: use the azurerm backend against a Storage Account with use_azuread_auth = true; locking happens via blob lease. On GCP: use a GCS bucket with versioning; locking is built in.
5. Enable state encryption (S3 default encryption plus a bucket policy that enforces TLS). State files contain secrets, so you want them encrypted both at rest and in transit.
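The AWS steps above can be sketched in Terraform itself. This is a minimal example, not a drop-in file: the bucket name, table name, key and region are placeholders you must replace, and the bootstrap resources belong in their own small config that you apply once before migrating anything. (Newer Terraform releases also offer S3-native locking via use_lockfile, but the DynamoDB approach shown here is the long-established pattern.)

```hcl
# --- Bootstrap config: apply once, before any backend migration ---

resource "aws_s3_bucket" "tfstate" {
  bucket = "example-corp-tfstate-prod" # placeholder name, must be globally unique
}

# Versioning lets you roll back to an earlier state file after a bad write.
resource "aws_s3_bucket_versioning" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Default encryption at rest for every state object.
resource "aws_s3_bucket_server_side_encryption_configuration" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

# Lock table: Terraform writes one item per state under the key LockID.
resource "aws_dynamodb_table" "tflock" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

# --- In each project config: point the backend at the bucket and table ---

terraform {
  backend "s3" {
    bucket         = "example-corp-tfstate-prod"
    key            = "network/terraform.tfstate" # one key per project/environment
    region         = "eu-central-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
```

After adding the backend block, run terraform init -migrate-state; Terraform copies the local state into the bucket and uses the lock table from then on. The Azure (azurerm) and GCP (gcs) backends take the same shape with their own arguments.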
When to bring us in
If you have a corrupted state file and need import work to realign resources, bring someone who runs terraform import daily. That limits the damage.
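For orientation, the recovery work usually circles around a handful of commands. This is a sketch with hypothetical resource addresses and IDs; the real addresses come out of your own plan output, and on a versioned S3 bucket you can additionally restore a previous state object before touching anything.

```shell
terraform state list        # what Terraform thinks exists
terraform plan              # diff state against reality

# Re-adopt a real resource that fell out of the state
# (address and bucket name are hypothetical):
terraform import aws_s3_bucket.assets my-assets-bucket

# Drop a state entry whose resource no longer exists:
terraform state rm aws_instance.ghost
```

Run terraform plan again after each correction; you are done when the plan is empty.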
See also
- Everyone logs in with the AWS root account: Root is for emergencies and billing. Day-to-day work belongs in IAM users or SSO.
- Every developer has AdministratorAccess: AdministratorAccess everywhere is convenient now, painful later. Start with role-based policies.
- Everyone has individual IAM users with their own password: Identity Center (formerly AWS SSO) links to your IdP and issues temporary credentials per session.
None of the above fits?
Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.
Or skip the DIY entirely
Our Managed IT clients do not look these things up. One point of contact, a fixed monthly price, resolved within working hours.