What is Cloud Hosting?
Cloud hosting is hosting that is available on-demand and can quickly and automatically scale up and down based on your immediate needs.
Cloud is not only about hosting your shop with a cloud provider like AWS, Microsoft Azure, Google Cloud Platform or DigitalOcean; it is more about how your shop's hosting is architected.
To understand the differences between cloud hosting and the traditional alternatives, we'll compare the following:
Single Shared or Dedicated Server
Multi-Server Setup
Serverless & Kubernetes - this is what powers Zento
A Practical Example: Highways
Understanding Cloud can be quite difficult, so we'll compare it to a real-world example: highways. Our fictional highways are free to build but cost money to operate, just like servers.
Let's assume we have a highway that can take 1000 cars/hour, while the average traffic is 500 cars/hour. During rush hour, traffic climbs closer to 800 cars/hour, which still results in a very good flow. But when the vacation season starts and traffic goes up to 5000 cars/hour, the traffic grinds to a halt; some drivers even turn around and look for alternate routes. Meanwhile, during the night there are fewer than 50 cars/hour, which is wasteful considering the 1000 cars/hour capacity.
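The arithmetic behind this example can be sketched in a few lines; the numbers are taken directly from the scenario above, and the 1.0 threshold for congestion is a simplification:

```python
# Toy model of the highway example: fixed capacity vs. varying demand.
# All figures come from the scenario above; capacity is in cars/hour.
CAPACITY = 1000

def utilization(demand: int, capacity: int = CAPACITY) -> float:
    """Fraction of capacity in use; values above 1.0 mean congestion."""
    return demand / capacity

for label, demand in [("average", 500), ("rush hour", 800),
                      ("vacation season", 5000), ("night", 50)]:
    u = utilization(demand)
    state = "congested" if u > 1.0 else "flowing"
    print(f"{label:>15}: {demand:>4} cars/h -> {u:.0%} utilized ({state})")
```

The same mismatch between fixed capacity and varying demand is what we'll see with servers.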
Scaling out the highway with another lane for the vacation season is possible in our theoretical world, but it requires a full shutdown of traffic; the same shutdown is needed to scale the highway back down after the season ends. An even bigger problem is that you have to know in advance whether your peaks will reach 1500 or 4000 cars/hour.
There's also the daily case of an oversized transport, which significantly slows down regular traffic while it transits your highway. And if a bridge on your highway collapses, traffic on the entire highway shuts down completely.
We'll now see how each hosting model maps to the highway example.
Single Shared or Dedicated Server
This is probably the most traditional hosting setup and is the solution offered by all the non-cloud providers, as well as the default setup you would get when simply starting a server on a cloud provider and deploying your shop's code to it.
In this setup, your shop runs on a shared server, a virtual private server (VPS) or a dedicated server. Optionally, the database might run on a separate server; although that technically makes the setup multi-server, from the application's viewpoint it is still single-server.
With a shared server, your shop shares the physical server's resources with other applications running on it. Your shop can suffer performance issues even under light traffic if the other applications on that server are under heavy load.
With a VPS, your shop runs on a virtual server inside a larger physical server and has resources reserved just for it. In this case, load from other virtual servers should not affect your server's performance.
With a dedicated server, your shop runs on a fully dedicated physical server; otherwise it behaves just like a larger VPS.
In all these setups, all the traffic hits your main server, which can't scale out to meet the traffic requirements. Like in our highway example, the server you provision for year-round operation is sized slightly above the regular peaks, but during big campaign events it is completely insufficient. Scaling out horizontally is not possible, so you can only scale up vertically: shut down your server, increase its capacity and start it back up. Even this is not possible with a dedicated server, or if your non-cloud provider doesn't have the capacity to scale up your VPS. So what happens if your server can't scale, or if your peak is unforeseen, such as an influencer with a large following posting about your products? The shop gets slower and slower and eventually crashes completely, and there is nothing you can do to react.
The oversized transport from the highway example shows up on your server in the form of large batch jobs, like product mass-updates, that periodically run on it. These cause a slowdown that users experience directly, which we know to be frustrating and to often result in decreased conversion.
Finally, if any failure happens on your server, your shop goes offline, just like the highway would be closed in case of a bridge collapse.
Single-server setups are inexpensive and easy to set up; however, they don't scale with traffic and are actually quite expensive, since they must be sized for peak traffic rather than average traffic. They are also vulnerable to failures, as anything that happens to that single server takes the whole shop offline. Let's see how multi-server setups improve on this.
Multi-Server Setup
A multi-server setup runs your shop's code on multiple servers, which allows adding new servers and scaling out horizontally.
With non-cloud providers, adding new servers can take hours or even days and it would most probably require developer intervention.
With cloud providers, adding a new server can happen in about 15 minutes, and with some effort it can even be automated so that no developer intervention is needed when scaling.
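The automation boils down to a rule evaluated periodically against fleet metrics. The sketch below is illustrative, not any provider's real API; the thresholds and names are assumptions:

```python
# Illustrative autoscaling rule: decide the fleet size from CPU utilization.
# Thresholds are made-up examples, not a provider default.
def desired_servers(current: int, cpu_utilization: float,
                    scale_up_at: float = 0.8, scale_down_at: float = 0.3,
                    minimum: int = 1) -> int:
    """Return how many servers the fleet should have after this check."""
    if cpu_utilization > scale_up_at:
        return current + 1   # add a server (it still takes ~15 min to be ready)
    if cpu_utilization < scale_down_at and current > minimum:
        return current - 1   # remove an idle server to save cost
    return current
```

Real scaling policies add cooldown periods and step sizes, but the decision logic is essentially this.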
However, your shop still needs to be prepared to run on multiple servers, which none of the popular open-source solutions currently do out-of-the-box; therefore a significant up-front investment into this setup is generally needed.
In our highway example, this translates into two parallel highways, each with a capacity of 1000 cars/hour. For the vacation season, you could add 3 more parallel highways to handle the 5000 cars/hour peak, and when the season ends they would be removed, all without any shutdown. Adding new highways still takes some time, though, so you remain vulnerable to sudden increases in traffic.
You would provision at least two servers running behind a load balancer. When needed, you would add another server (i.e. scale out horizontally) and route traffic to it. With cloud providers like AWS, Azure or GCP, you can build setups that scale with a single click, but it can still take 15 minutes or more for a new server to be ready to take traffic; with non-cloud providers, adding new servers can take hours or days.
The periodically running batch jobs need a third, separate server, to make sure they don't affect user visits. So this multi-server setup needs at least two servers for visitors, another for batch jobs, and separate servers for the database and caching, which increases the operational cost.
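The sizing math for this setup can be sketched as follows; the per-server throughput figure is a made-up assumption for illustration:

```python
import math

# Capacity sketch for the multi-server setup described above.
REQUESTS_PER_SERVER = 1000  # requests/hour one web server handles (assumed)

def fleet_size(traffic: int) -> dict:
    """Servers needed for a given traffic level, in requests/hour."""
    # At least 2 web servers for redundancy, plus the fixed supporting servers.
    web = max(2, math.ceil(traffic / REQUESTS_PER_SERVER))
    return {"web": web, "batch": 1, "database": 1, "cache": 1}

print(fleet_size(500))   # a quiet day still pays for the 2-server minimum
print(fleet_size(5000))  # a campaign peak needs more web servers
```

Note how the baseline never drops below five servers, which is where the fixed cost of this model comes from.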
Multi-server setups are harder and more expensive to set up, but they scale with traffic to a degree and are far more resilient to failures. They are expensive to run, since you need at least 4 servers and, given the long scale-up time, they must be sized closer to the regular daily peaks rather than the average traffic.
So let's see how Serverless & Kubernetes do better.
Serverless & Kubernetes
Kubernetes is a container management system, which we explain in more detail in a dedicated article. Your application runs in containers (called pods) that run on servers (called nodes). The container naming originates from shipping containers, which are standard in shape and size and can be loaded onto cargo ships regardless of their contents. Your application needs at least two pods running, to make sure that when user visits hit the shop they are ready to serve them quickly.
Serverless is similar to the container approach, with the major difference that management is done entirely by the cloud provider and you pay only for each page served (for example, 1 million calls would cost about $6.67 if each execution takes 0.3s and uses 1GB of memory). Your shop no longer pays for idle time waiting for traffic; the cloud provider absorbs that cost.
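The cost arithmetic behind that example is simple: providers bill by GB-seconds (memory × execution time). The per-GB-second rate below is back-derived from the $6.67 figure above and is purely illustrative; real pricing varies by provider, platform and region:

```python
# Serverless cost sketch: billing is by GB-seconds of execution.
# Rate is derived from the example figure above, not a quoted provider price.
RATE_PER_GB_SECOND = 6.67 / (1_000_000 * 0.3 * 1.0)

def serverless_cost(calls: int, seconds_per_call: float, memory_gb: float) -> float:
    """Total cost for a number of calls at a given duration and memory size."""
    gb_seconds = calls * seconds_per_call * memory_gb  # the billed unit
    return gb_seconds * RATE_PER_GB_SECOND

print(f"${serverless_cost(1_000_000, 0.3, 1.0):.2f}")  # the example: $6.67
```

The key point: halve the traffic and the bill halves too, with no idle servers to pay for.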
Serverless is only available at major cloud providers and although Kubernetes can be set up on non-cloud providers, it rarely is, due to its high complexity.
In our highway example, the Kubernetes approach would be the highway of the future: new 100 cars/hour lanes are automatically added in seconds depending on traffic needs; during vacation season or if suddenly traffic peaks, your highway would scale up with new lanes and then scale back down as soon as traffic decreases; during the night it can go down to two lanes. Serverless is even more advanced: you don't need to manage any highway infrastructure or think about scaling and you pay only for the number of cars transiting. Oversized traffic would get its own wide lane, isolated from any regular traffic.
With Kubernetes there's a complete separation between servers (nodes) and the containers running on them (pods), so they can scale out as much as needed; you can find more details in the dedicated Kubernetes article.
With Serverless, the separation is even better: management of the underlying systems is handled entirely by the cloud provider, and you only need to focus on the application code.
The downside is complexity: Kubernetes setups are hard to build and manage, and adapting applications to run on Serverless can sometimes be very difficult or even impossible.
So Kubernetes and Serverless are difficult to set up on your own, but they are the most scalable option, highly resilient to failures and the most cost-effective to run, since you pay only for the compute capacity you actually use.
When thinking about Cloud hosting, the advantages to look for in your shop's hosting are scalability, redundancy and cost efficiency.
Since Zento's hosting is powered by Kubernetes and Serverless, it is the most advanced setup available today with a strong focus on scalability, reliability and cost efficiency.