> And running your own hardware is not incompatible with Kubernetes: on the contrary
Kubernetes actually makes so much more sense on bare-metal hardware.
On the cloud, I think the value prop is dubious - your cloud provider is already giving you VMs, why would you need to subdivide them further and add yet another layer of orchestration?
Not to mention that you're getting 2010s-era performance on those VMs, so subdividing them is terrible from a performance point of view too.
> Not to mention that you're getting 2010s-era performance on those VMs, so subdividing them is terrible from a performance point of view too.
I was trying in vain, a couple of weeks ago, to explain to our infra team why giving my team a dedicated node from a newer instance family with DDR5 RAM would benefit an application that is heavily constrained by memory speed. People seem to assume that compute is homogeneous.
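For anyone who wants to demonstrate this to their own infra team: a minimal STREAM-style copy probe makes the difference visible, since two instances with identical vCPU counts can report very different numbers. This is a rough sketch assuming NumPy is available; the 256 MiB working-set size is an arbitrary choice, picked only to be larger than typical last-level caches.

```python
import time
import numpy as np

# Rough memory-bandwidth probe (STREAM-style copy). The array size is an
# assumption: it just needs to exceed the last-level cache so we measure
# DRAM, not cache, bandwidth.
N = 32 * 1024 * 1024          # 32M float64 values ~= 256 MiB
a = np.ones(N)
b = np.empty_like(a)

reps = 10
t0 = time.perf_counter()
for _ in range(reps):
    np.copyto(b, a)           # reads a, writes b: ~2 x 256 MiB of traffic per rep
elapsed = time.perf_counter() - t0

gib_moved = reps * 2 * a.nbytes / 2**30
print(f"approx copy bandwidth: {gib_moved / elapsed:.1f} GiB/s")
```

Run the same script on the old and new instance families side by side; for a workload that is genuinely bandwidth-bound, the ratio of these two numbers is a decent first-order predictor of the speedup.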
I would wager that the people arguing against your request for a specific hardware config are the same ones in this comment section railing against any form of self-sufficiency, like hosting on your own hardware. All they know is cloud; all they know how to do is "ScAlE Up thE InStanCE!" when shit hits the fan. It's difficult to argue against that and make real progress. I understand your frustration completely.