Is `cdk bootstrap` safe to run on a production AWS system?

I have inherited a small AWS project, and the infra is built in CDK. I am relatively new to CDK.

I have a Bitbucket pipeline that deploys to our preprod environment fine. Since it feels reliable, I am now productionising it.

I explained in a prior question that the project has no context for the production VPCs and subnets. I was advised there that I could get AWS to generate the context file; I have not had much luck with that, so for now I have hand-written it.

For safety I have made the deployment command a no-execute one:

cdk deploy --stage=$STAGE --region=eu-west-1 --no-execute --require-approval never

In production I get this error with the prod creds:

current credentials could not be used to assume 'arn:aws:iam::$CDK_DEFAULT_ACCOUNT:role/cdk-xxxxxxxx-lookup-role-$CDK_DEFAULT_ACCOUNT-eu-west-1', but are for the right account. Proceeding anyway.
Bundling asset VoucherSupportStack/VoucherImporterFunction/Code/Stage…

I then get:

❌ VoucherSupportStack failed: Error: VoucherSupportStack: SSM parameter /cdk-bootstrap/xxxxxxxx/version not found. Has the environment been bootstrapped? Please run 'cdk bootstrap' (see

I am minded to run cdk bootstrap in the production pipeline, on a one-off basis, as I think this is all it needs. We have very little CDK knowledge in my team, so I am a bit stuck on getting the appropriate reassurance: is this safe to run on a production AWS account?

As I understand it, it will just create a harmless "stack" that does nothing (unless we start using cdk deploy ...).

Solution:

Yes, you need to bootstrap every environment (account/region) that you deploy to, including your production environment(s).
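As a sketch, bootstrapping is a one-off command per account/region pair (the account ID below is a placeholder for your production account):

```shell
# One-off: provision the CDK bootstrap stack (CDKToolkit) in the
# production account/region. Safe to re-run; it is idempotent.
cdk bootstrap aws://123456789012/eu-west-1
```

This creates a CloudFormation stack named CDKToolkit containing the staging bucket and the IAM roles (such as the lookup role in your error message) that `cdk deploy` and `cdk synth` expect to find.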

It is definitely safe to do – it’s what CDK expects.

You can scope the execution role down if you need to (the default CloudFormation execution policy is AdministratorAccess).
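For example, you can pass a tighter managed policy at bootstrap time with the `--cloudformation-execution-policies` flag (the policy ARN here is just an illustration; pick one that covers the resources your stacks actually create):

```shell
# Bootstrap with a narrower execution policy instead of the
# default AdministratorAccess. The policy must still permit
# everything your CDK stacks deploy.
cdk bootstrap aws://123456789012/eu-west-1 \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/PowerUserAccess
```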

Ideally, though, your pipeline shouldn't be performing lookups during synth at all. The recommended approach is to run cdk synth once with your production credentials; this performs the lookups and populates the cdk.context.json file. Commit that file to version control, and your pipeline will use the cached values instead of performing the lookups on every run.
