Notes on Importing mysqldump
2 Feb 2019

Background: I have multiple 200 GB+ MySQL dumps to import. It takes a lot of time.
- `pv` shows you the progress, so you have an idea of how long the import will take.
- Pipe the `dump.sql.gz` file to `gunzip` instead of unzipping it first and then importing. This saves a lot of disk reads and writes, not to mention time and effort, e.g. `pv dump.sql.gz | gunzip -c | mysql` (a fuller pipeline is sketched after this list).
- `grep` is slow on a large file. Consider `LC_ALL=C fgrep` instead. Thanks to this SO thread. Also consider the `-b` / `--byte-offset` option to show the byte offset of each match, which may be useful.
- To skip the first N bytes of a file (e.g. 100) and get the remaining bytes, use `{ dd bs=100 skip=1 count=0; cat; } <dump.sql`. To get the first N bytes of a file, use `head -c 100 dump.sql`.
- To speed up editing the file, split it into pieces and merge them back instead of editing it in place with a text editor. Use `cat a b > c` to merge files (see the split/edit/merge sketch after this list).
- Common sense: use `less` instead of `vim` to view large files.
- Applying some of the tricks above: `pv dump.sql | LC_ALL=C fgrep -b 'Current Database'`. This helps you find the different databases in the dump, so you can edit the schema or loosen the constraints. Combined with the byte tricks above, this also lets you carve out a single database (see the last sketch after this list).
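A minimal sketch of the full import pipeline from the first two notes above. The user `root` and the database name `mydb` are assumptions for illustration, not part of the original notes:

```bash
# Create the target database first (skip this if the dump already
# contains CREATE DATABASE statements).
mysql -u root -p -e 'CREATE DATABASE IF NOT EXISTS mydb'

# Stream the compressed dump straight into mysql. pv reads the file and
# shows throughput, elapsed time, and an ETA based on the file size.
pv dump.sql.gz | gunzip -c | mysql -u root -p mydb

# For an uncompressed dump, drop the gunzip stage:
pv dump.sql | mysql -u root -p mydb
```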
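One way to realize the split/edit/merge trick, assuming GNU coreutils `split` and that the lines you need to change sit in a known piece; the piece size and filenames are illustrative:

```bash
# Cut the dump into 100 MB pieces named part_aa, part_ab, ...
split -b 100M dump.sql part_

# Edit only the small piece that contains the lines you care about.
vim part_aa

# Merge the pieces back into a single file.
cat part_* > dump_edited.sql
```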
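And a sketch of combining the last trick with the byte tricks to extract a single database from the dump. The offsets and database names below are hypothetical; substitute the values that `fgrep -b` actually prints for your dump:

```bash
# Print the byte offset where each database section begins.
pv dump.sql | LC_ALL=C fgrep -b 'Current Database'
# Hypothetical output:
#   1234567:-- Current Database: `db1`
#   9876543:-- Current Database: `db2`

# Suppose db1 starts at byte 1234567 and db2 at byte 9876543:
# dd seeks past everything before db1, then head keeps exactly the
# bytes up to where db2 begins.
{ dd bs=1234567 skip=1 count=0; head -c $((9876543 - 1234567)); } <dump.sql >db1.sql
```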
More Reading