Notes on Importing mysqldump

Background: I have multiple 200GB+ MySQL dumps to import, and importing them takes a long time.

  1. pv shows you the progress so you have an idea of how long the import will take.
  2. Pipe the dump.sql.gz file through gunzip instead of unzipping it first and then importing. This saves a lot of disk reads and writes, not to mention time and effort, e.g. pv dump.sql.gz | gunzip -c | mysql (a fuller pipeline is sketched after this list).
  3. grep is slow on a large file; consider LC_ALL=C fgrep instead. Thanks to this SO thread. Also consider the -b / --byte-offset option to print the byte offset of each match, which comes in handy later (see the comparison after this list).
  4. To skip the first N bytes (e.g. 100) of a file and stream the rest, use: { dd bs=100 skip=1 count=0; cat; } <dump.sql. To get only the first N bytes of a file, use head -c 100 dump.sql. A worked example follows the list.
  5. To speed up editing the file, split it, edit only the piece that needs changes, and merge the pieces back together instead of editing the whole file in place with a text editor. Use cat a b > c to merge files; see the split-edit-merge sketch after this list.
  6. Common sense: use less instead of vim to view large files.
  7. Applying some of the tricks above: pv dump.sql | LC_ALL=C fgrep -b 'Current Database'. This shows where each database starts in the dump, so you can edit the schema or loosen the constraints per database; the last sketch after this list carves out a single database this way.
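
Putting items 1 and 2 together, a minimal import pipeline with a progress bar; the database name mydb is a placeholder, and credentials are assumed to be supplied elsewhere (e.g. via -u/-p or a config file):

    # compressed dump: decompress on the fly, no temporary .sql file on disk
    pv dump.sql.gz | gunzip -c | mysql mydb

    # uncompressed dump: pv alone provides the progress bar and ETA
    pv dump.sql | mysql mydb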
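
For item 3, the slow and the fast way to search the dump; 'Current Database' is just an example pattern:

    # slow: locale-aware regular-expression matching
    grep 'Current Database' dump.sql

    # faster: fixed-string search in the C locale, printing byte offsets
    LC_ALL=C fgrep -b 'Current Database' dump.sql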
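
For item 4, a sketch of slicing the dump by byte position; the offset 100 and the output file name rest.sql are arbitrary:

    # first 100 bytes of the dump
    head -c 100 dump.sql

    # everything after the first 100 bytes, streamed without opening an editor
    { dd bs=100 skip=1 count=0; cat; } <dump.sql >rest.sql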
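
For item 5, a sketch of the split-edit-merge workflow; the 1G chunk size and the part_ prefix are arbitrary. Byte-based splitting may cut a statement in half, but concatenating the pieces back restores the file exactly:

    # split the dump into 1 GB pieces: part_aa, part_ab, part_ac, ...
    split -b 1G dump.sql part_

    # edit only the piece that needs changes (e.g. part_aa), then merge back
    cat part_aa part_ab part_ac > dump_edited.sql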
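
Tying items 3, 4 and 7 together, a sketch of carving one database out of a multi-database dump; the offsets 1000 and 5000 are placeholders you would read from the fgrep -b output:

    # list the byte offset of each 'Current Database' marker
    pv dump.sql | LC_ALL=C fgrep -b 'Current Database'

    # suppose the database you want starts at byte 1000 and the next one at byte 5000:
    # skip the first 1000 bytes, then take the next 4000 bytes
    { dd bs=1000 skip=1 count=0; head -c 4000; } <dump.sql >one_db.sql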