One VPC for everything or a VPC per app: what's sensible?
For most SMBs, one VPC per environment (prod, staging) is enough, with separate subnets per app or tier. More VPCs means more peering, more NAT, more cost.
Try this first
1. Start with one VPC per environment and a /16 CIDR roomy enough not to fill up in 5 years, e.g. 10.0.0.0/16 for prod and 10.1.0.0/16 for staging.
2. Per AZ, create one public and one private subnet, and put all workloads in private. Public subnets are only for load balancers and NAT gateways.
3. Use separate subnets or security groups per app, not separate VPCs. That keeps routing simple and NAT cost low.
4. Only consider a separate VPC if the app has a different security profile (tenant isolation, payments, compliance scope).
5. Document your CIDR plan in a shared table, otherwise you'll get collisions the moment someone wants to connect on-prem or another cloud.
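The CIDR plan in the steps above can be sanity-checked in a few lines of Python with the standard-library `ipaddress` module. This is a minimal sketch, not AWS tooling: the environment CIDRs match the example above, while the AZ names and the choice of /20 subnets are illustrative assumptions you would replace with your own.

```python
import ipaddress

# Example plan from the steps above: one /16 per environment.
PLAN = {
    "prod": ipaddress.ip_network("10.0.0.0/16"),
    "staging": ipaddress.ip_network("10.1.0.0/16"),
}

def check_no_overlap(plan):
    """Fail fast if any two environment CIDRs overlap."""
    nets = list(plan.items())
    for i, (name_a, net_a) in enumerate(nets):
        for name_b, net_b in nets[i + 1:]:
            if net_a.overlaps(net_b):
                raise ValueError(f"{name_a} {net_a} overlaps {name_b} {net_b}")

def subnets_per_az(vpc, azs, new_prefix=20):
    """Carve one public and one private subnet per AZ out of the VPC CIDR.

    A /20 gives ~4k addresses per subnet; a /16 holds sixteen /20s,
    so three AZs x two subnets leaves plenty of spare room.
    """
    chunks = vpc.subnets(new_prefix=new_prefix)
    return {az: {"public": next(chunks), "private": next(chunks)} for az in azs}

check_no_overlap(PLAN)
# AZ names are illustrative; use your own region's AZs.
layout = subnets_per_az(PLAN["prod"], ["eu-west-1a", "eu-west-1b", "eu-west-1c"])
for az, subnets in layout.items():
    print(az, "public:", subnets["public"], "private:", subnets["private"])
```

Printing the layout gives you the shared table from step 5 for free, and re-running `check_no_overlap` whenever someone adds an environment or an on-prem range catches collisions before they reach a route table.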
When to bring us in
If you're building multi-tenant SaaS where each customer needs network isolation, the design is materially different. A short review pays off before you commit.
See also
- Everyone logs in with the AWS root account: root is for emergencies and billing. Day-to-day work belongs in IAM users or SSO.
- Every developer has AdministratorAccess: AdministratorAccess everywhere is convenient now, painful later. Start with role-based policies.
- Everyone has individual IAM users with their own password: Identity Center (formerly AWS SSO) links to your IdP and issues temporary credentials per session.
None of the above fits?
Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.
Or skip the DIY entirely
Our Managed IT clients don't look these things up: one point of contact, a fixed monthly price, and issues resolved within business hours.