Error switching back to Private=No

I noticed that I was paying $130 per month for a private rack, because of the NAT gateways. I didn’t really need this extra isolation, so I wanted to run: convox rack params set Private=No.

The CloudFormation update failed with this error: Export production:SubnetPrivate0 cannot be deleted as it is in use by production-

Fortunately the rollback was completed without any downtime.

Is there a way to migrate back to Private=No, or will I need to start from scratch with a new rack?


For what it’s worth, we encountered the same issue, but the other way around (going from non-private to private). We ended up having to create a new rack from scratch, but it would be interesting to know if there is another solution.

I just remembered that I’ve run into some issues when using Spot instances, so I wonder if this is related. I might try switching to OnDemand-only instances before setting Private=No, and maybe that will work.

Unfortunately that didn’t work.

I ran:

convox rack params set SpotInstanceBid= OnDemandMinCount=3

Then waited for everything to be updated.

Then I ran:

convox rack params set Private=No

Still got the same error; CloudFormation immediately showed UPDATE_ROLLBACK_IN_PROGRESS:

  • Export production:SubnetPrivate1 cannot be deleted as it is in use by production-my-app

It would be great if the update steps could be reordered so that the private subnets are only deleted after everything else has been updated.

We are facing the same issue (going from Private to Non-Private). It is still unresolved, and we’re also looking for a solution.

I believe it is an issue on CloudFormation’s end.

The problem is that the private subnets are in use by your applications.

You can try running convox apps params set Private=No on each of your applications, and once they have all finished updating, change the rack.
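The sequence above can be sketched as a short shell loop. This is only a hedged sketch: the -a flag for targeting an app and the output format of convox apps follow the usual Convox CLI conventions, but verify them against your CLI version (convox apps --help) before running anything.

```shell
# Sketch: set Private=No on every app first, then on the rack itself.
# Assumptions: `convox apps` prints one app name per line after a header row,
# and `-a` selects the target app -- confirm both for your CLI version.
for app in $(convox apps | awk 'NR>1 {print $1}'); do
  convox apps params set Private=No -a "$app"
done

# Wait until every app stack has finished updating before switching the rack:
convox rack params set Private=No
```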

Awesome, thank you very much! I didn’t realize that this was the issue.

I ran convox apps params set Private=No for each app, and that worked. After that, convox rack params set Private=No seems to be working as well.

Thanks again!

Sorry, I spoke too soon! I thought the CloudFormation update was working this time, but it just failed:


Now I’m seeing “UPDATE_ROLLBACK_FAILED”, with Status reason:

The following resource(s) failed to update: [ApiMonitorService, VolumeTarget1, SpotInstances, VolumeTarget0, VolumeTarget2].

I will try clicking “Continue Update Rollback”, and hopefully that will fix everything (but I guess I’m risking some downtime).

Here is the screenshot from the failed rollback:


After clicking “Continue Update Rollback”, the rollback failed again:


But I clicked “Continue Update Rollback” one more time after that, and managed to get to “UPDATE_ROLLBACK_COMPLETE”. Thankfully there wasn’t any downtime during this process. (Also I’m not sure if there are any resources that need to be cleaned up.)
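For anyone stuck in the same UPDATE_ROLLBACK_FAILED loop: the AWS CLI’s continue-update-rollback command also accepts a list of resources to skip, which can save repeated clicking when the same resources keep failing. A hedged sketch only — the stack name and logical resource IDs below are just the ones from the error message above, and skipped resources are left in their current state, so they may need manual cleanup afterwards:

```shell
# Continue the rollback but skip the resources that keep failing.
# --resources-to-skip takes logical IDs of resources in UPDATE_FAILED state;
# stack name assumed to match the rack name ("production").
aws cloudformation continue-update-rollback \
  --stack-name production \
  --resources-to-skip SpotInstances VolumeTarget0 VolumeTarget1 VolumeTarget2 ApiMonitorService
```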

I think I can resolve the “SpotInstances” issue by clearing the SpotInstanceBid value so that I’m only using OnDemand instances. I’m not sure how to fix the AWS::EFS::MountTarget errors or the ApiMonitorService one, though.

I’m a little late here, but I’m having a similar issue: I was unable to switch my rack to Private=Yes via the CLI due to rollbacks.

Checking CloudFormation, the update was timing out around the Instances and AWS::EFS::MountTarget resources (“unable to stabilize resource”).

Digging into it, it appears the MountTargets need to be updated by replacement, and all of the instances they’re mounted on need to be restarted, so it’s going to be a slow process when a larger rack is updating.

I was looking at setting a ServiceRole to see how the timeouts would change, and decided to update my rack’s Private parameter to Yes via the CloudFormation web interface instead… and it successfully updated my rack this time. Perhaps adding a service role with the correct permissions to the rack’s CloudFormation stack would also resolve this for the CLI, but I didn’t look further into it since I have a workaround for now. Hopefully this helps a future person Googling!