Amazon Aurora MySQL considerations and restrictions
I really like the Aurora service, and I have used Aurora MySQL extensively over the last four years. But nothing comes for free: convenience comes with limitations.
Below I'm sharing a few limits that I wish I had been aware of from the start.
Metered storage and IOs
With Amazon Aurora you are charged for compute, storage, network traffic, and backups. Charges are based on actual usage; you don't need to pre-allocate a specific storage size or IOPS. Although this is very convenient and can save costs, it can be a double-edged sword.
In Aurora, IOs are metered and charged. The only limit on storage bandwidth and IOPS is how much the instance's CPU and network bandwidth allow. So if not monitored, an increase in storage IOs, and the associated cost, can go unnoticed.
The same applies to storage size: Aurora cluster storage grows automatically, and note that it can't be shrunk. Deleting old and unused data will not reduce storage costs. The only option is to create a new cluster, manually migrate the needed data to it, and then delete the old one.
I recommend keeping an eye on your cluster's storage usage; good monitoring and a data retention policy from day one are important.
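As a starting point for that monitoring, Aurora publishes cluster-level CloudWatch metrics such as VolumeBytesUsed, VolumeReadIOPs, and VolumeWriteIOPs. Here is a minimal Python sketch that builds the parameters for a CloudWatch GetMetricStatistics call; the cluster identifier is a placeholder, and the actual boto3 call is shown commented out.

```python
from datetime import datetime, timedelta, timezone

def volume_metric_query(cluster_id, metric="VolumeBytesUsed", hours=24):
    """Build GetMetricStatistics params for an Aurora cluster-level storage metric."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/RDS",
        "MetricName": metric,  # e.g. VolumeBytesUsed, VolumeReadIOPs, VolumeWriteIOPs
        "Dimensions": [{"Name": "DbClusterIdentifier", "Value": cluster_id}],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": 3600,          # one data point per hour
        "Statistics": ["Maximum"],
    }

# With boto3 this would be used as:
# cloudwatch = boto3.client("cloudwatch")
# cloudwatch.get_metric_statistics(**volume_metric_query("my-cluster"))
```

Feeding VolumeBytesUsed and the IOPs metrics into an alarm is a cheap way to catch the cost creep described above before it shows up on the bill.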
Compatibility with limited MySQL versions
Currently Aurora MySQL is compatible only with MySQL versions 5.6.10a and 5.7.12. I believe this is due to the extensive customizations AWS made to MySQL. This results in:
- You are out of luck if your application depends on features available only in newer MySQL versions.
- AWS in some cases backports bug fixes from newer releases, but this is slow and usually covers only critical bugs and vulnerabilities. This can be frustrating if your application is impacted by one of the "lower priority" ones; in that case you don't have many options other than waiting for AWS to fix the issue.
No shell or filesystem access
This is a side effect of Aurora being a managed service. It impacts systems that assume full filesystem access, especially those using the LOAD DATA INFILE statement to import data, or maintenance scripts that run on a local MySQL instance.
Usually adapting the application to this limitation is not difficult, unless you don't have control over its code. Also, with a managed DB service, your need to run maintenance tasks is reduced to virtually none.
This issue may impact old systems that are not cloud native.
Limited SUPER privileges
This is a side effect of using a managed service. It can be noisy at first for DBAs who are used to managing their own servers, but you can adapt to it. Some examples of the nuisance:
- You can't use the SET GLOBAL ... or SET @@GLOBAL ... syntax. The only way to change global variables is through RDS parameter groups (if AWS allows you to change that specific parameter).
- Imports of dumps containing views or stored procedures/functions may fail if the definer user is different from the one running the import. You need to manually edit the dump and change or remove the definer.
- You can't GRANT ALL PRIVILEGES ON *.*. Instead you need to explicitly specify the privileges, excluding SUPER, FILE, and others that AWS doesn't allow. This is usually not a bad thing.
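For the definer problem above, editing the dump can be automated. This is a small sketch of one way to do it; the view name and dump statement are made up for illustration.

```python
import re

# Strip DEFINER clauses from a mysqldump so that views and stored
# procedures/functions import as the user running the import.
DEFINER_RE = re.compile(r"DEFINER=`[^`]+`@`[^`]+`\s*")

def strip_definers(dump_sql: str) -> str:
    """Remove every DEFINER=`user`@`host` clause from a SQL dump string."""
    return DEFINER_RE.sub("", dump_sql)

stmt = "CREATE DEFINER=`app`@`%` VIEW `v_orders` AS SELECT * FROM orders;"
print(strip_definers(stmt))
# CREATE VIEW `v_orders` AS SELECT * FROM orders;
```

For large dumps the same substitution can be streamed line by line instead of loading the whole file into memory.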
InnoDB is the only supported storage engine
Aurora supports only the InnoDB storage engine for data storage. MyISAM can be used for temporary tables, but this is not recommended.
Keep in mind that Aurora silently ignores the engine specified in CREATE TABLE statements.
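Because of that silent override, it can be worth scanning DDL for non-InnoDB engine clauses before importing it, so nothing changes engine without you noticing. A minimal sketch (the table definitions are illustrative):

```python
import re

# Find CREATE TABLE statements that request a non-InnoDB engine;
# Aurora will silently create these as InnoDB anyway.
ENGINE_RE = re.compile(r"ENGINE\s*=\s*(\w+)", re.IGNORECASE)

def non_innodb_engines(ddl: str):
    """Return the list of requested engines other than InnoDB."""
    return [e for e in ENGINE_RE.findall(ddl) if e.lower() != "innodb"]

ddl = ("CREATE TABLE t1 (id INT) ENGINE=MyISAM; "
       "CREATE TABLE t2 (id INT) ENGINE=InnoDB;")
print(non_innodb_engines(ddl))  # ['MyISAM']
```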
Can't directly kill queries or connections of other users
This is a result of restricting the SUPER privilege. KILL statements will fail for connections owned by other users. Aurora provides alternative stored procedures for admins: mysql.rds_kill and mysql.rds_kill_query. This can limit some functions in monitoring systems not adapted to Aurora/RDS MySQL.
Another point is that these stored procedures are much slower than the native KILL statements, which can be a real issue if you need to kill a large number of connections.
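For bulk kills, one practical approach is to generate the CALL statements from a list of thread ids (for example, filtered out of information_schema.processlist) and run them in one batch. A sketch, with made-up thread ids:

```python
# Generate the Aurora/RDS equivalents of a bulk KILL.
# Thread ids would normally come from information_schema.processlist.
def rds_kill_statements(thread_ids, query_only=False):
    """Build CALL statements for mysql.rds_kill / mysql.rds_kill_query."""
    proc = "mysql.rds_kill_query" if query_only else "mysql.rds_kill"
    return [f"CALL {proc}({tid});" for tid in thread_ids]

for stmt in rds_kill_statements([101, 102]):
    print(stmt)
# CALL mysql.rds_kill(101);
# CALL mysql.rds_kill(102);
```

Pass query_only=True to terminate only the running query while keeping the connection open.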
Limited replication options
Aurora doesn't support multi-source replication, and it doesn't support replication filters.
If one of these options is needed, you can use external replication tools like DMS (AWS Database Migration Service).
Hard maximum limits of open files
This can be a problem if your application uses tens of thousands of tables, as multi-tenant applications often do. As of now, Aurora caps open_files_limit at 64k.
open_files_limit = 10 + max_connections + (table_open_cache * 2)
So the maximum table_open_cache can only be between 20k and 30k, depending on the max_connections value. This can result in many cache misses and slower queries when a very large number of tables is used, even with the biggest available instance sizes.
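Rearranging the formula above shows the table_open_cache budget directly. A quick calculation, assuming the 64k cap means 65535:

```python
# open_files_limit = 10 + max_connections + (table_open_cache * 2)
# Solving for table_open_cache under Aurora's hard cap:
OPEN_FILES_LIMIT = 65535  # assumed value of the "64k" cap

def max_table_open_cache(max_connections: int) -> int:
    """Largest table_open_cache that still fits under the open files cap."""
    return (OPEN_FILES_LIMIT - 10 - max_connections) // 2

for conns in (6000, 16000):
    print(conns, max_table_open_cache(conns))
# 6000 29762
# 16000 24762
```

With tens of thousands of tables, either value leaves a large fraction of tables outside the cache.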
Failovers are not instant
Although Aurora provides very good high availability, failures still happen, and they come with downtime.
When the writer instance fails, Aurora automatically fails over to one of the reader instances. During failover, the newly promoted writer restarts to change its role, and during the restart it rolls back the transactions that were in flight on the old writer when it crashed. So the failover time depends on the traffic that was running on the writer.
As a general recommendation, long-running and heavy transactions should be avoided.
Less tuning options
The storage engine in Aurora is heavily modified, which makes many InnoDB-related configuration parameters inapplicable.
AWS also doesn't allow changing some other parameters.
Usually this is not an issue, but I'd like to give an example that annoys me. In our custom monitoring tool we use performance_schema tables to collect query statistics. In MySQL, the default limit on query text and digest length in performance_schema is 1024 characters, which can be changed with configuration parameters; unfortunately, these parameters are not configurable in Aurora. This leads to many truncated queries in our reports.
Bugs
This is natural: Aurora is based on MySQL, so it inherits MySQL bugs, in addition to possibly new bugs introduced by AWS customizations.
Over the past four years I have watched Aurora become much more stable, and the criticality of this issue is decreasing fast.