Is there a relation between how many processes an application can scale up to and the number of EC2 instances the rack must therefore have?
My guess is that if an app is scaled to X processes (containers), you need at least X+1 EC2 instances: one container per instance, plus a free instance so you can deploy a new version without downtime. Is my assumption correct?
Is it possible to have an equal or even smaller number of EC2 instances in a rack than an app’s process count? What are the implications of that? Is it true that a single EC2 instance cannot run two processes of the same app? If so, why? What’s the limiting factor?
I’d appreciate some clarification around this topic please.
I’m assuming you’re using gen2, in which case you can deploy as many instances of an app’s process as the underlying EC2 instances have capacity for. For a new deployment, you need at least one extra slot for your app’s process available on the EC2 instances.
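To make the rolling-deploy math concrete, here’s a rough sketch in plain Python (not Convox code; the per-instance slot count is an assumed parameter, and real capacity is of course measured in CPU/memory, not slots):

```python
def can_deploy_new_version(instance_count, slots_per_instance, running_processes):
    """Return True if a rolling deploy can start: the rack needs at least
    one free slot beyond the currently running processes."""
    total_slots = instance_count * slots_per_instance
    return total_slots >= running_processes + 1

# With one slot per instance, X running processes need X+1 instances:
print(can_deploy_new_version(3, 1, 3))  # False: no free slot for the new version
print(can_deploy_new_version(4, 1, 3))  # True

# If an instance can hold several processes, fewer instances suffice:
print(can_deploy_new_version(2, 2, 3))  # True
```

This is why "processes <= instances" is not a hard rule on gen2: the extra slot for a deploy can come from spare capacity on an existing instance rather than from a whole extra instance.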
So to answer your question: yes, you can have fewer EC2 instances than your app’s process count, although Convox requires at least 3 EC2 instances for a base rack.
Hi @crohr, thanks for your reply.
What about gen1? What makes gen2 special in this regard please?
gen2 uses an ALB with target groups, while gen1 uses an ELB with a direct mapping between ELB and EC2 ports, so if I recall correctly you can only have one process per EC2 instance mapped to a given ELB port.
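A toy model of the difference (plain Python, not Convox internals; instance names and the port range are made up): with an ELB-style fixed host port, each instance can bind the app’s port only once, whereas an ALB target group registers (instance, port) pairs, so one instance can host several containers on distinct dynamic ports.

```python
def place_gen1(instances, host_port, count):
    """ELB style: every container must bind the same fixed host port,
    so at most one container of the app fits per instance."""
    placements = []
    for inst in instances:
        if count == 0:
            break
        placements.append((inst, host_port))  # one binding per instance max
        count -= 1
    return placements if count == 0 else None  # None: not enough instances

def place_gen2(instances, count):
    """ALB target-group style: containers get distinct dynamic host ports,
    so instances can be reused."""
    placements = []
    next_port = 32768  # start of a typical ephemeral port range
    for i in range(count):
        placements.append((instances[i % len(instances)], next_port))
        next_port += 1
    return placements

print(place_gen1(["i-a", "i-b"], 80, 3))  # None: 3 processes won't fit on 2 instances
print(place_gen2(["i-a", "i-b"], 3))      # 3 placements across 2 instances
```

So on gen1 the fixed ELB-to-EC2 port mapping is the limiting factor the original question asked about; on gen2 the dynamic ports remove it.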
Thank you @crohr. I appreciate you taking the time to reply.