Autoscaling Databases in Kubernetes for MongoDB, MySQL, and PostgreSQL

source link: https://www.percona.com/blog/2021/06/23/autoscaling-databases-in-kubernetes-for-mongodb-mysql-and-postgresql/

At some point, your needs for improving database performance may run into a threshold where optimization and tuning are no longer enough. If you cannot change the database engine and can no longer tune the parameters for your workload, you need to scale.

As is widely known, scaling may be either vertical or horizontal. Vertical scaling means adding more resources to a single node. It is usually not related to database architecture in any special way – you just use faster storage (drives with better I/O capacity), use different storage tiers for different partitions, and so on. Horizontal scaling (adding and removing nodes), on the contrary, is all about distributed databases.

So we can say that vertical scaling is mainly node-specific, and horizontal scaling is specific to database architecture. That’s why we are going to deal with horizontal scaling in this post.

Horizontal scaling is especially problematic in the case of mainstream database management systems initially designed for single-node usage: storage and computational load are not segregated within them, and clusters are maintained by having several full redundant copies of the same data.

Automating this adds even more complexity because your database should be scaled out before the actual load increases and scaled in when the load decreases.

To what extent is it possible to automate horizontal scaling of MongoDB, MySQL, and PostgreSQL databases in Kubernetes, and how do you do it? Let’s try to find the answers.

Before getting into the practical solution, we should recall some theory:

  • How can read and write requests be scaled in the reviewed databases, and why should they be scaled separately?
  • What are the possibilities to automate this scaling?

Scaling Writes

The nature of databases divides scaling into two very different parts: writes scaling and reads scaling.

Typically, when changing the database engine and tuning are out of scope, write requests are not significantly scalable in a relational database – you still need to write data to each node in the cluster. The only way to scale writes horizontally is database sharding, where data is spread across multiple database instances and an additional component redirects queries to the needed instances.

If you have to scale writes, but a database engine change, tuning, and sharding are not considered, you are limited to vertical scaling of your node resources. Sharding can be implemented in several ways: it can be supported natively by a specific database management system; if there is no native support, there may be a third-party extension for such functionality; finally, you can implement shards manually on the application level, separating portions of data and using different endpoints/connections. Generally, it is worth choosing the easiest of the available ways if there are no additional considerations.

Autoscaling Databases in Kubernetes

Let us quickly list all the possibilities.

Sharding in MongoDB is the easiest because it is a native feature of this database. For Kubernetes, MongoDB sharding is supported by the Percona Kubernetes Operators in your Percona Distribution for MongoDB environment. You can find information about enabling this feature here.
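As a rough illustration, enabling this feature boils down to a few fields in the Operator’s Custom Resource. The fragment below is a sketch based on the cr.yaml layout; exact field names and defaults may differ between Operator versions:

YAML
# Fragment of the Percona Server for MongoDB Custom Resource (a sketch, not a
# complete manifest): enables mongos routers and the config server replica set.
spec:
  sharding:
    enabled: true
    configsvrReplSet:
      size: 3     # config servers store the cluster metadata
    mongos:
      size: 3     # mongos instances route queries to the right shard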

Sharding in MySQL currently has several implementations.

Sharding in PostgreSQL can also be implemented automatically or manually:

  • You can implement sharding with the Citus PostgreSQL extension (Citus Data, the company behind it, was acquired by Microsoft in 2019). Here is a blog post about implementing a sharded database with it.
  • Also, you can create a sharded database manually, combining declarative partitioning and PostgreSQL’s Foreign Data Wrapper (a sketch of this approach follows the list).
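For the manual approach, a minimal sketch may look like the following; the shard host, database, credentials, and the users table are all hypothetical, and the commands are meant to run on the coordinator node:

Shell
# Manual sharding sketch: declarative hash partitioning + postgres_fdw.
# One partition stays local; the other lives on a remote shard, which must
# already have a users_p1 table of the same shape.
psql <<'SQL'
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER shard1 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'pg-shard-1', dbname 'app');
CREATE USER MAPPING FOR CURRENT_USER SERVER shard1
    OPTIONS (user 'app', password 'secret');

CREATE TABLE users (id bigint, name text) PARTITION BY HASH (id);
CREATE TABLE users_p0 PARTITION OF users
    FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE FOREIGN TABLE users_p1 PARTITION OF users
    FOR VALUES WITH (MODULUS 2, REMAINDER 1)
    SERVER shard1 OPTIONS (table_name 'users_p1');
SQL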

Scaling Reads

Of course, sharding also helps a lot with scaling reads, but in many cases it is not mandatory if you only need to scale reads without scaling writes. Scaling reads can also be achieved with various caching techniques. But there is an even simpler way: increase the number of replicas and split reads and writes.

Read/write splitting in MongoDB can be done in the following ways:

  • By a database driver if sharding is off,
  • By mongos if sharding is on.

If you use the standard MongoDB driver, you already have splitting (just choose the secondary, secondaryPreferred, or nearest read preference instead of the default primary one); you can even control the write concern.
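For example, a connection string with a non-default read preference and an explicit write concern may look like this (hosts, credentials, and the database name are placeholders):

Shell
# readPreference moves reads off the primary; w sets the write concern.
mongosh "mongodb://app:secret@mongo-0,mongo-1,mongo-2/appdb?replicaSet=rs0&readPreference=secondaryPreferred&w=majority"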

Read/write splitting in MySQL can be done as follows:

  • With ProxySQL (a query-rule sketch follows this list),
  • On the application level (just use two endpoints, one for writes and one for reads with HAProxy in front of the database).
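With ProxySQL, the split is usually expressed as query rules that send SELECTs to a reader hostgroup. A hedged sketch, assuming hostgroup 10 holds the writer and hostgroup 20 holds the readers (default admin credentials shown; change them in production):

Shell
# Run against the ProxySQL admin interface (port 6032 by default).
mysql -h 127.0.0.1 -P 6032 -u admin -padmin <<'SQL'
-- SELECT ... FOR UPDATE must still go to the writer hostgroup
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1),
       (2, 1, '^SELECT', 20, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
SQL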

Read/write splitting in PostgreSQL:

  • By a database driver – but not all PostgreSQL drivers support read/write splitting: e.g., libpq (the native PostgreSQL driver) supports it, but node-postgres (the Node.js driver) does not.
  • If your PostgreSQL driver doesn’t support splitting, you can split reads and writes with the Pgpool-II proxy (see the configuration sketch after this list).
  • You can also make separate connections for reads and writes in your application.
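A minimal Pgpool-II sketch may look like this; the hostnames and the config file path are assumptions, while the parameter names come from pgpool.conf:

Shell
# Append minimal load-balancing settings to pgpool.conf (path is an assumption).
cat <<'EOF' >> /etc/pgpool2/pgpool.conf
load_balance_mode = on           # spread SELECTs across backends
backend_hostname0 = 'pg-primary'
backend_port0 = 5432
backend_weight0 = 0              # keep the primary for writes only
backend_hostname1 = 'pg-replica'
backend_port1 = 5432
backend_weight1 = 1              # send load-balanced reads here
EOF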

Autoscaling Reads

How can we do horizontal scaling for read requests?

When you increase the number of members in your cluster, an existing member has to take a backup, send it to the new member, and the new member has to restore it (at least, that’s how databases currently do it in Kubernetes).

For example, suppose you have configured read/write splitting and have a cluster of two members. At the very moment you scale it to three members, read performance drops substantially (up to two times when done by Kubernetes Operators, because connections are routed away from the donor member) and stays lower until the backup transfer finishes. Only after the backup is restored do you get the actual read performance increase.

Note: of course, you can tweak connections to make the performance decrease less significant – but in this case, it will take longer until your database reaches the target performance, which you supposedly need as soon as possible.

That’s why we can’t scale automatically based on the load exceeding the capacity of our two members: we should scale the cluster in advance. And you can predict your load in advance only if you know that there will be a peak load at some specific hours, so you can scale out before that moment.

[Figure: read performance during scale-out – a temporary drop while the new member is provisioned, followed by the eventual increase]

So, only scheduled scale-out makes sense.
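In Kubernetes, scheduled scale-out can be as simple as a CronJob running kubectl against the database Custom Resource. The following is a sketch: the ServiceAccount name is hypothetical and must be granted RBAC permissions to scale pxc resources, and a mirror CronJob (not shown) would scale back in after the peak:

YAML
# Scale the PXC cluster out to 5 members before the daily peak.
apiVersion: batch/v1        # use batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: scale-out-cluster1
spec:
  schedule: "30 7 * * *"    # 07:30 every day, shortly before the expected peak
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pxc-scaler   # hypothetical; needs pxc scale RBAC
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest
            command: ["kubectl", "scale", "--replicas=5", "pxc/cluster1"]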

Horizontal Autoscaling – The Kubernetes Way

If you use a dedicated Operator with your database clusters, such as the Percona Kubernetes Operator for Percona XtraDB Cluster, scaling can be simplified even more. Let us see an example of doing this for a MySQL cluster managed by the Percona XtraDB Cluster Operator.

You can scale the cluster out and in with the following command:

Shell
kubectl scale --replicas=5 pxc/cluster1
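You can then check that the new size was accepted (the exact output columns depend on the Operator version):

Shell
kubectl get pxc cluster1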

Note: make sure that the kubectl scale command is supported by the Operator you are using, if any. For example, the Percona XtraDB Cluster Operator supports it only in per-namespace (non-clusterwide) mode, starting from version 1.8.0, and some other Operators may lack this feature entirely. It is also a good idea to check your Operator’s documentation for any scaling-related specifics that may affect autoscaling. For example, both the Percona XtraDB Cluster Operator and the Percona Operator for Percona Server for MongoDB have the allowUnsafeConfigurations option, which should be turned on to prevent the Operator from resetting the number of instances to safe defaults (the Operators carefully prevent potentially unsafe combinations, such as an even number of instances, and this may conflict with autoscaling).
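In the Custom Resource, that option is a single flag; the fragment below shows it for the Percona XtraDB Cluster Operator (a sketch of the relevant part only):

YAML
# Fragment of the PXC Custom Resource: lets the Operator accept member counts
# it would otherwise reset to safe defaults.
spec:
  allowUnsafeConfigurations: true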

Finally, if the temporary performance decrease is not a problem in your case, you can try using the Horizontal Pod Autoscaler, which will scale your database cluster dynamically based on some metric (such as CPU utilization) instead of on a schedule. The Percona XtraDB Cluster Operator allows you to do it with a config similar to the following one:

YAML
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: cluster1
spec:
  minReplicas: 3
  maxReplicas: 5
  metrics:
  - resource:
      name: cpu
      targetAverageUtilization: 2
    type: Resource
  scaleTargetRef:
    apiVersion: pxc.percona.com/v1
    kind: PerconaXtraDBCluster
    name: cluster1
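After saving the manifest (the file name below is arbitrary), apply it and watch the autoscaler react. Keep in mind that the HPA can compute CPU utilization only if the database pods have CPU resource requests set:

Shell
kubectl apply -f hpa.yaml
kubectl get hpa cluster1 --watch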

Conclusion

As we’ve shown, horizontal scaling of traditional databases falls into two very different tasks: scaling reads and scaling writes. The only way to scale writes horizontally is database sharding, while horizontal scaling of reads involves read/write splitting. Both sharding and read/write splitting can be done in different ways for the considered databases, and generally, the less native the implementation you choose, the more effort it may take to bring it up.

As far as horizontal autoscaling is concerned, it is a possible option, but scaling out comes with a large temporary drop in performance. Therefore, it usually makes sense as scaling on a schedule, tied to a planned load change.

Finally, Kubernetes can automate scaling even more, but any database-related Operators in use should be checked for their compatibility with native Kubernetes scaling capabilities.

