Retain Database Dump Tables Script
There are occasions when a regular dump of the database isn't enough to troubleshoot an issue. In those cases you can dump the database by tables instead.
This script does that for you. It logs into the Retain database and dumps each table into its own file. The dumpTables.sh script can be obtained in two ways:
- For Americas support engineers, you can find it in the /mysql directory of the Software Repository.
- For everyone else, click here to download it.
Modifying the dumpTables Script
You need to edit the script to make it work for your system. Edit these values in the script to match your system's values. There are three sections:
DATABASE CONNECTION AND DUMP OUTPUT LOCATION
Replace everything within square brackets, including the square brackets themselves.
- Name of the Retain database (typically retain):
- DBNAMES="[name of Retain database]"
- Location of the database server (e.g., localhost or IP/DNS hostname):
- HOST="--host=[localhost or IP/DNS hostname of DB server]"
- Leave this as is unless the MySQL superuser is something other than "root", which is possible but rarely seen:
- USER="--user=root"
- Password for the root MySQL user:
- PASSWORD="--password=[root user password]"
- Self-explanatory (e.g., /data/mysql-archive/dump). Whatever path is used, the mysql user needs to own the destination directory and have read/write rights:
- BACKUP_DIR="[path to dump output directory]"
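For example, a completed connection block might look like the following. The host, password, and output path shown here are placeholder values; substitute your own:

    DBNAMES="retain"
    HOST="--host=localhost"
    USER="--user=root"
    PASSWORD="--password=MySecretPwd"
    BACKUP_DIR="/data/mysql-archive/dump"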
FILE COMPRESSION OPTIONS
Enabling either or both of these options disables the easy restore. You will rarely need the tar option; however, if a customer is transferring their database dump to GWAVA's datacenter or to some other location within their environment, the GZip option reduces the size of the files and protects them during transport. You could use a script to copy the GZip files to the new location as they are created (see the sketch after this list), so they can be imported into the new system immediately while the dump process continues with the remaining tables.
- Valid values: yes / no
- GZip individual tables:
- ZIP="no"
- Tar all the individual files into one single .tar file at the end of the job:
- TAR="no"
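As mentioned above, a small watcher script can copy the GZip files to their destination as they are created. The sketch below is not part of dumpTables.sh; it is a hypothetical helper, and the directory, destination, and polling interval are assumptions to adapt:

    #!/bin/bash
    # Hypothetical helper: transfer finished .gz dump files as they appear.
    # Assumes ZIP="yes" in dumpTables.sh and passwordless SSH to the destination.
    DUMP_DIR="/data/mysql-archive/dump"   # must match BACKUP_DIR in dumpTables.sh
    DEST="user@remotehost:/data/import"   # placeholder destination

    while true; do
        for f in "$DUMP_DIR"/*.gz; do
            [ -e "$f" ] || continue       # glob matched nothing; no .gz files yet
            # --remove-source-files deletes each local file after a clean copy
            rsync --remove-source-files "$f" "$DEST"
        done
        sleep 30                          # poll every 30 seconds
    done

Note that a file still being compressed could be caught mid-write; checking that gzip has finished with it (for example, with lsof) before copying would make this more robust.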
PRE-EXISTING FILE CLEANUP
Cleans up files from a previous run of the dump tables script. This is a dangerous setting, as you could lose dump files that you may want to keep.
- Valid values: yes / no
- CLEANUP="no"
Executing the dumpTables Script
After editing the settings:
- Save the script
- Give the script file execute permissions (chmod +x dumpTables.sh)
- Make MySQL the owner of the destination dump directory: chown mysql:mysql [path to dump directory]
- Ensure that the directory has read and write rights: chmod +r+w [path to dump directory]
- From the directory containing the script, run the script: ./dumpTables.sh
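Put together, the preparation and run steps might look like this, assuming the placeholder dump directory from the earlier example:

    chmod +x dumpTables.sh
    chown mysql:mysql /data/mysql-archive/dump
    chmod +r+w /data/mysql-archive/dump
    ./dumpTables.sh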
The instructions for importing all of the tables are saved to the file __restoreData.sql; thus, when the dump has completed, the only file you need to import is __restoreData.sql.
Importing the Dump
The steps for importing depend on whether you chose to gzip the table dump files and on whether you chose to tar the entire dump:
- If you enabled the tar option: tar -xf [tar dump file]
- If you enabled the zip option: gzip -d [gz dump file]
- mysql -uroot -p[root pwd] < __restoreData.sql
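For example, assuming both options were enabled and using placeholder file names:

    cd /data/mysql-archive/dump
    tar -xf retainDump.tar     # only if TAR="yes"; archive name is a placeholder
    gzip -d *.gz               # only if ZIP="yes"
    mysql -uroot -p < __restoreData.sql    # prompts for the root password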