The following is an excerpt from an article published at hyperconverged.org titled “Isn’t Linear Scaling Wasteful?”
In conversations with my customers about hyperconvergence, some of the sharper folks ask an important question about the model: “If I buy a node with both compute and storage every time, isn’t that wasteful when I only need compute OR storage?” The assumption is that workloads will not scale linearly, so if the infrastructure scales linearly, one or more resources may end up overprovisioned.
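To make the concern concrete, here is a minimal sketch of the arithmetic behind the question. The node specs and workload numbers are purely hypothetical (not any vendor’s actual SKU); the point is just that when every node ships with a fixed slice of both resources, a compute-heavy workload drags along storage it never asked for:

```python
import math

# Hypothetical node specs -- illustrative numbers only, not a real product configuration.
CORES_PER_NODE = 20
STORAGE_TB_PER_NODE = 10

def nodes_needed(cores_required, storage_tb_required):
    """Nodes needed when every node ships with a fixed slice of both compute and storage."""
    return max(math.ceil(cores_required / CORES_PER_NODE),
               math.ceil(storage_tb_required / STORAGE_TB_PER_NODE))

# A compute-heavy workload: lots of cores, relatively little data.
cores, storage = 200, 30
n = nodes_needed(cores, storage)
spare_storage = n * STORAGE_TB_PER_NODE - storage
print(f"{n} nodes -> {spare_storage} TB of storage beyond what the workload asked for")
# 10 nodes -> 70 TB of storage beyond what the workload asked for
```

That “extra” capacity is what prompts the question, and it is exactly the apparent waste the rest of this post argues is worth paying for.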
Hyperconvergence vendors deal with this in various ways. They may allow you to add compute-only nodes with no storage, or offer node configurations that pair a large addition of RAM with only a small addition of storage. Although these options can be helpful in certain situations, linear scaling is usually desirable. It may seem wasteful on the surface, but scaling out in consistent chunks proves beneficial for several reasons:
- IO Performance
- Failure Domain
- Data Locality
Again, for a full rundown, check out the original post!