The thing is, you are assuming here that our current design is somehow limiting from a deployment perspective. This very site auto-scaled yesterday from 3 to 10 nodes because CPU was high for … reasons. That happened automatically: the web image in the AWS container registry got deployed onto new EC2 instances, magically, with no human intervention.
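For context, the glue behind that kind of scale-out is a few lines of configuration, not code anyone maintains. A hypothetical AWS target-tracking policy (the 50% target is made up for illustration) looks roughly like:

```json
{
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ASGAverageCPUUtilization"
  },
  "TargetValue": 50.0
}
```

Attach something like that to the auto scaling group and EC2 adds or removes instances on its own; nobody pages a human when CPU spikes.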
You are trying to convince me here that we need to bite off a significant piece of work that will certainly make debugging Docker in the wild harder for hobbyists, all for the greater good of following best practices.
Today, when a hobbyist has a problem, I tell them:
./launcher rebuild app
That resolves most issues. The script does not need to reason about a pod and four interrelated services, and we don’t have to worry about end users configuring logging, log rotation, and a bunch of other complex stuff.
We do not host a monolithic container with db+redis+app. We host web pods / db pods / redis pods and so on, each with an IPv6 address, and using service discovery from container labels we glue everything together magically in our environment. Hobbyists do not need any of this.
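To make concrete the kind of glue involved (and why hobbyists should never have to see it): the discovery step boils down to reading labels off containers and wiring services together by name. A minimal sketch, using a canned label blob in place of a real `docker inspect` call, and hypothetical `service.name` / `service.links` label keys invented for illustration:

```shell
# Hypothetical label set; in a real environment this would come from
# `docker inspect --format '{{json .Config.Labels}}' <container>`.
labels='{"service.name":"web","service.links":"db,redis"}'

# Pull individual label values out of the JSON blob.
name=$(printf '%s' "$labels" | sed -n 's/.*"service\.name":"\([^"]*\)".*/\1/p')
links=$(printf '%s' "$labels" | sed -n 's/.*"service\.links":"\([^"]*\)".*/\1/p')

# The real glue would resolve each linked service's IPv6 address here;
# this sketch just reports what would be wired.
echo "wiring $name -> $links"
```

Every moving part in that sketch (label conventions, address resolution, the wiring itself) is a thing an end user could misconfigure, which is exactly the complexity `./launcher rebuild app` keeps them away from.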