MySQL server is running out of disk space
I have seen this impact many production servers, and the common cause was one thing: unanticipated binlog file growth.
The SaaSOps engineer did his best not to impact the production MySQL server while importing a relatively big schema. He calculated the source schema size and validated that the target server had more than enough space to accommodate the new data. Yet storage was exhausted and the server crashed in the middle of the import.
In another case, the product team was stumped to see their MySQL server losing free storage space at an unprecedented rate. The only recent change was some new heavy UPDATE statements, but they didn't link those to the fast growth in used storage, since there were no new inserts.
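In both cases, a quick way to confirm that binlogs are the culprit is to list them with their sizes from the MySQL client. For example:

```sql
-- List every binlog file the server still retains, with its size in bytes
SHOW BINARY LOGS;

-- Confirm binary logging is on and where the files live on disk
SELECT @@log_bin, @@log_bin_basename;
```

If the total of the `File_size` column is a large share of the data directory's volume, binlog growth is the problem.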
When binary logging is enabled, MySQL records all changes to the database in binlog files. MySQL supports the following three binlog formats:
- Statement: the text of each executed statement is recorded. This is the most compact and efficient format, but it has some limitations (for example, non-deterministic statements cannot be replicated safely).
- Row: the changes to each affected row are recorded. You can imagine how large the generated files can get when updates or deletes impact many rows.
- Mixed: a combination of the previous two. By default the engine uses statement format, except in certain cases as listed in https://dev.mysql.com/doc/refman/8.0/en/binary-log-mixed.html
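You can check which format a server is using, and change it, through the `binlog_format` system variable. A quick example (note that ROW has been the default since MySQL 5.7.7):

```sql
-- The global and session values can differ; check both
SELECT @@global.binlog_format, @@session.binlog_format;

-- Switch only the current session (requires admin privileges,
-- and cannot be done mid-transaction)
SET SESSION binlog_format = 'MIXED';
```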
The combination of binlog and heavy writes impacts the whole server, mainly as follows:
- Extra storage space usage and disk I/O.
- Increased replication lag, in case classic binlog-based MySQL replication is used.
- Large write commits block all other DML statements until all changes of the commit are written to the binlog files.
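The replication-lag symptom is easy to verify on a replica. For example, on MySQL 8.0.22+ (older versions use `SHOW SLAVE STATUS` with the `Seconds_Behind_Master` column instead):

```sql
-- Run on the replica; Seconds_Behind_Source reports the lag
SHOW REPLICA STATUS\G
```

A lag value that spikes during the heavy write window and then drains slowly is a strong hint that large binlog events are the bottleneck.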
Reducing the binlog retention period can help with the storage space issue, but will not resolve the other issues.
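In MySQL 8.0 the retention period is controlled by `binlog_expire_logs_seconds` (default 2592000, i.e. 30 days). A sketch of shortening it, plus a one-off purge for immediate relief:

```sql
-- Expire binlogs after 3 days instead of the default 30
SET GLOBAL binlog_expire_logs_seconds = 259200;  -- 3 * 24 * 3600

-- One-off: purge files older than one day right now
-- (safe only if no replica still needs them)
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 1 DAY;
```

Remember that `SET GLOBAL` does not survive a restart; persist the setting in `my.cnf` under `[mysqld]` (or use `SET PERSIST`) as well.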
Another option is to disable binlog temporarily during the heavy write operation, then rebuild the replicas or manually sync them afterwards. This needs careful analysis and planning.
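The temporary disabling can be scoped to a single session via `sql_log_bin`, so the rest of the server keeps logging normally. A minimal sketch, keeping in mind the replicas will never see these changes and must be resynced afterwards:

```sql
-- Requires SUPER or SYSTEM_VARIABLES_ADMIN; session scope only
SET sql_log_bin = 0;

-- ... run the heavy import / bulk UPDATE here ...

-- Re-enable logging for the rest of the session
SET sql_log_bin = 1;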