10/Jun 2021
6 min. read
Background
This is the 2nd part of the Fixing Slow MySQL series. If you haven’t seen the 1st part, here it is: Fixing Slow MySQL #1: Foreign Key & Row Lock Contention.
Despite the successful fix in part 1, the database is still slow and the application goes down as load goes up, but the symptoms are different from last time, which means we have hit another bottleneck.
Database Schema
Here’s a minimal database schema to illustrate the issue.

28/May 2021
4 min. read
Background
There is service downtime due to “Slow MySQL”. But why is it slow, and how do we fix it? Let’s start digging.
This post is the first part of the series.
Database Schema
Here’s a minimal database schema to illustrate the issue.
CREATE TABLE parent (
  id INT NOT NULL AUTO_INCREMENT,
  child_count INT NOT NULL,
  PRIMARY KEY (id)
);

CREATE TABLE child (
  id INT NOT NULL AUTO_INCREMENT,
  parent_id INT,
  FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE CASCADE,
  PRIMARY KEY (id)
);

Add example data to the database.

19/Mar 2019
5 min. read
Background
My gevent application hangs after issuing a query that yields 1000+ rows and then a COMMIT. After fixing mysqlclient (see Python mysqlclient Doesn’t Work Well with gevent), the COMMIT no longer hangs the whole application, but it still hangs the greenlet that issues the SQL query.
Setup: Ubuntu 16.04, ProxySQL v1.4.13, Amazon RDS MySQL 5.6 (ProxySQL backend), Python 2.7.12 application running on the same machine as ProxySQL
Symptoms
It is always reproducible with that specific query, that backend, and on that machine.

20/Feb 2019
2 min. read
Update 21 Mar 2019 There’s a new post Fixing ProxySQL Idle Threads Epoll Hang Heisenbug about fixing a bug of ProxySQL when --idle-threads is enabled.
Problem
ProxySQL does not scale with high numbers of connections by default. When there are tens of thousands of client connections to ProxySQL, its CPU usage may go up to 300%+ because it uses only 4 threads by default. Even with 3000 mostly idle connections, it hits ~70% CPU usage.

2/Feb 2019
1 min. read
Background: I have multiple 200GB+ MySQL dumps to import. It takes a lot of time.
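One way to cut the import time is to stream the compressed dump straight through gunzip instead of unzipping it to disk first. Here is a minimal self-contained sketch of that pipeline; the file and its contents are made up for illustration, and in a real import the final stage would be the mysql client, with pv wrapping the first stage to show progress:

```shell
# Create a small compressed stand-in for a real dump.sql.gz (illustrative only).
printf 'CREATE TABLE t (id INT);\nINSERT INTO t VALUES (1);\n' | gzip > /tmp/dump.sql.gz

# Stream-decompress on the fly instead of writing an unzipped copy to disk.
# Against a real server this would be:  pv dump.sql.gz | gunzip -c | mysql
gunzip -c /tmp/dump.sql.gz | wc -l
```

The point is that the uncompressed SQL never touches the disk; it flows from gunzip’s stdout directly into the next stage’s stdin.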
pv shows you the progress, so you have an idea of how long the import will take. Pipe the dump.sql.gz file to gunzip instead of unzipping it and then importing; this saves a lot of disk reads and writes, not to mention time and effort, e.g. pv dump.sql.gz | gunzip -c | mysql. grep is slow for a large file.

10/Dec 2018
3 min. read
I have recently migrated to Amazon RDS from MySQL on Azure VM.
This Percona article about migrating to RDS is actually very helpful. I have a few things to add, though.
Setup
My setup is a self-hosted MySQL 5.6 server in a VM on Azure and a new Amazon RDS for MySQL 5.6.
AWS DMS
I have tried AWS Database Migration Service (DMS). It is reasonably fast. The main problem is that it does not look like a mature technology at all.