Auto Scaling: what is it and how does it work?
November 30, 2022
If we ask two people looking at the same cloud in the sky what shape it has, we will probably get very different answers. The same thing happens when we ask CIOs what their perception of the public cloud is. Between unicorns and monsters, few know how to distinguish the stormy cumulonimbus from the tall, light stratocumulus.
In this rapidly evolving environment, still in search of an identity, several myths about the public cloud have taken hold, generally fueled by misinformation and by the interests of those who feel threatened by the paradigm shift. It is worth hearing from companies that have already migrated to the cloud and can now testify to what it means to run most of their systems in this invisible environment.
The first of these myths concerns cloud performance: the claim that the cloud often fails to meet users' expectations and leaves much to be desired in terms of availability. In fact, most of these complaints come from companies that placed their systems in smaller data centers with various technological restrictions, generally offering hosting solutions where the hardware is the customer's property. That is not the reality of the public cloud. In the clouds of the major providers such as AWS, Google, Microsoft, and Oracle, the amount of resources is practically unlimited for most customers, and the quality of service depends entirely on the characteristics and configurations you choose for your environment.
The sky is the limit in terms of resources. If something is not performing well, it is simply a matter of adjusting resource parameters to obtain more bandwidth, traffic, CPU, memory, or storage. In addition, several tools make it possible to rework the solution's architecture so that it scales more easily, delivering better performance in an optimized and economical way.
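To make the idea of scaling concrete, the core of an auto-scaling policy can be sketched as a simple feedback rule: compare observed utilization to a target and adjust the number of servers, clamped to configured bounds. This is an illustrative sketch only; the function name, target value, and bounds are assumptions, not any provider's actual API.

```python
import math

def desired_capacity(current_count: int, cpu_utilization: float,
                     target: float = 0.6, min_count: int = 2,
                     max_count: int = 10) -> int:
    """Target-tracking rule (illustrative): size the fleet so that
    average CPU utilization approaches `target`.

    All parameter names and defaults here are hypothetical examples,
    not taken from AWS, Google, Microsoft, or Oracle documentation.
    """
    if cpu_utilization <= 0:
        # Fleet is idle: shrink to the configured floor.
        return min_count
    # Proportional rule: capacity needed = current * observed / target,
    # rounded up so we never under-provision, then clamped to bounds.
    needed = math.ceil(current_count * cpu_utilization / target)
    return max(min_count, min(max_count, needed))
```

A monitoring loop would call this periodically and launch or terminate servers until the fleet matches the returned value; real cloud schedulers add cooldown periods to avoid oscillation.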
The second myth has to do with security. There is a perception that bringing data and systems to the cloud can increase exposure and vulnerability to security threats. Here, once more, we are victims of poor implementation of hosting solutions that claim to be in the cloud, but do not offer the basic security features available in large public clouds. The lack of knowledge about the security mechanisms that are natively available for all environments also contributes to this perception. Among them, we find complete solutions for system isolation, encryption, prevention and mitigation of attacks, monitoring and auditing, in addition to an extremely sophisticated and granular identification and access control system for all cloud resources.
It is extremely difficult to find companies that have this set of tools permanently updated with an automated SOC (Security Operations Center) and strict criteria for physical access to equipment, as is the case with the large public cloud providers. Therefore, it becomes a question of education and knowledge to understand that, in the majority of cases, migration to the cloud represents an improvement in the general security of solutions.
The third myth concerns cloud costs. This subject can be quite complex, because there are several pricing models for the same resources. However, the myth that the public cloud is more expensive comes from poor comparisons with pure hosting solutions, where resources are allocated (purchased) statically, with growth margins already built in.
The fundamental point is that in the public cloud we can request expansions immediately, so it makes no sense to pre-allocate resources that are not being used. Since we pay by the hour of use, every idle server represents a saving. The objective is to right-size workloads to their actual consumption and to use the ability to scale horizontally by adding more servers in the form of machine clusters. Thus, we can create and terminate servers throughout the day, adjusting service capacity to what is actually used; perhaps the simplest example is scheduling the startup and shutdown of servers that operate only during business hours. In other words, through monitoring, automation, and tools that allow dynamic use of resources, we can realize the cloud's cost savings.
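The business-hours example above can be sketched as a small decision function that a scheduled job would consult before starting or stopping servers. This is a minimal sketch under assumed rules (weekdays only, 8:00 to 19:00); the hours and the function name are illustrative, not any provider's scheduler.

```python
from datetime import datetime

def should_be_running(now: datetime,
                      open_hour: int = 8, close_hour: int = 19) -> bool:
    """Decide whether a business-hours-only server should be up.

    Assumptions (illustrative): the business operates Monday-Friday
    from `open_hour` to `close_hour` in the server's local time.
    """
    if now.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        return False
    return open_hour <= now.hour < close_hour
```

A cron job or cloud scheduler could run this check every few minutes and invoke the provider's start/stop API accordingly, so the company pays only for the roughly 55 hours a week the servers are actually needed instead of all 168.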
The last myth relates to building hybrid clouds (public/private) as a strategy to cope with the public cloud's supposed inability to accommodate certain applications. Of course, there will always be very specific applications that are extremely difficult to move to the cloud, but these are much rarer than the ones cited as justification for maintaining private clouds. To illustrate, consider the solutions that make it possible to take an ERP to the cloud in a safe and efficient way. These are vital applications for companies, mostly written in legacy technologies and interconnected with several other services, yet they have been completely transferred to the cloud with great success.
For the great majority of companies, local infrastructure will be reduced to users' computers and the Internet access that connects them to the cloud. The private part of the hybrid cloud appears only as a temporary arrangement to compensate for the inability to move everything to the cloud at once.
This content was produced by SkyOne's team of cloud and digital transformation experts.