Retain Database Dump Tables Script


There are occasions when doing a regular dump of the database isn't enough to resolve an issue. In those cases you can dump the database by tables instead.

This script does that for you: it logs into the Retain database and dumps each table into its own file. The dumpTables.sh script can be obtained in two ways:

  • For Americas support engineers, you can find it in the /mysql directory of the Software Repository.
  • For everyone else, copy the contents of the script at the bottom of this page into a text editor and save it.


Modifying the dumpTables Script

You need to edit the script to make it work for your system. Edit the values in the script to match your system's values. There are three sections to edit:

DATABASE CONNECTION AND DUMP OUTPUT LOCATION

Replace everything within square brackets, including the square brackets themselves.

Name of Retain database (i.e., retain):
DBNAMES="[name of Retain database]"
Location of the database server (e.g., localhost or IP/DNS hostname):
HOST="--host=[localhost or IP/DNS hostname of DB server]"
Database user. Leave this as is unless the MySQL superuser is something other than "root", which is possible but rarely seen:
USER="--user=root"
Password for the root MySQL user:
PASSWORD="--password=[root user password]"
Dump directory (e.g., /data/mysql-archive/dump). Whatever path is used, MySQL needs to own the destination directory and have read/write rights:
BACKUP_DIR="[path to dump output directory]"
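
For example, a filled-in connection section might look like this (the hostname, password, and dump path below are placeholders; substitute your own values):
<pre>
DBNAMES="retain"
HOST="--host=localhost"
USER="--user=root"
PASSWORD="--password=MySecretPassword"
BACKUP_DIR="/data/mysql-archive/dump"
</pre>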

FILE COMPRESSION OPTIONS

Enabling either or both of these options disables the easy restore. Rarely would you have a need for the tar option; however, if a customer is transferring their database dump to GWAVA's datacenter or to some other location within their environment, using the GZip option would reduce the size of the files and protect them during transport. You could use a script to copy the GZip files to the new location as they are created, so that they can be imported into the new system immediately while the dump process continues to work through the remaining tables (see the sketch after these settings).

Valid values: yes / no
GZip individual tables:
ZIP="no"
Tar all the individual files into one single .tar file at the end of the job:
TAR="no"
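
A minimal sketch of such a copy helper is shown below. It is not part of dumpTables.sh; it assumes rsync over SSH is available, and the destination host, user, and paths are placeholders you must replace:
<pre>
#!/bin/bash
# Hypothetical helper (not part of dumpTables.sh): periodically copy finished
# .sql.gz table dumps to another server so they can be imported there while
# the dump is still running. Replace DUMP_DIR and DEST with real values.
DUMP_DIR="/data/mysql-archive/dump"
DEST="user@destination-host:/data/incoming-dump/"

while true; do
    # rsync re-transfers any file that changed since the last pass, so a table
    # that was only partially written will be corrected on a later pass.
    rsync -av "${DUMP_DIR}"/*.sql.gz "${DEST}"
    sleep 60
done
# Stop the loop (Ctrl+C) once dumpTables.sh finishes and run rsync one last
# time to pick up the final files.
</pre>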

PRE-EXISTING FILE CLEANUP

Cleans up files from a previous run of the dump tables script. This is a dangerous setting, as you could lose dump files that you may want to keep.

Valid values: yes / no
CLEANUP="no"

Executing the dumpTables Script

After editing the settings:

  • Save the script
  • Give the script file execute permissions (chmod +x dumpTables.sh)
  • Make MySQL the owner of the destination dump directory: chown mysql:mysql [path to dump directory]
  • Ensure that the directory has read and write rights: chmod +r+w [path to dump directory]
  • From the directory containing the script, run the script: ./dumpTables.sh
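
Put together, the sequence looks like this (the dump directory shown is a placeholder; use the path you set in BACKUP_DIR):
<pre>
chmod +x dumpTables.sh
chown mysql:mysql /data/mysql-archive/dump
chmod +r+w /data/mysql-archive/dump
./dumpTables.sh
</pre>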

The instructions for importing all of the tables will be saved to the file __restoreData.sql; thus, when the dump has completed, the only file you need to import manually is __restoreData.sql.
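
For reference, the generated __restoreData.sql follows the pattern below (based on the echo statements in the script listing; the table names shown are examples, and the script writes one source line per table in your schema):
<pre>
SET AUTOCOMMIT=0;
SET UNIQUE_CHECKS=0;
SET FOREIGN_KEY_CHECKS=0;
source __createDatabase.sql;
source t_message.sql;
source t_recipient.sql;
...
SET FOREIGN_KEY_CHECKS=1;
SET UNIQUE_CHECKS=1;
COMMIT;
</pre>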

Importing the Dump

The steps for importing depend on whether you chose to:

  • Tar the entire dump
  • Gzip the table dump files
  1. If you enabled the tar option, the script also gzips the tar file, so decompress it first and then extract it: gzip -d [tar.gz dump file], then tar -xf [tar dump file]
  2. If you enabled the zip option, decompress the individual table files: gzip -d [gz dump files]

For all situations:

mysql -uroot -p[root pwd] < __restoreData.sql

The __restoreData.sql file is the only file you need to import manually. It contains instructions that import all of the other table dump files automatically.
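
As a worked example, a restore where the tar option was enabled might look like the following (the archive name, dump path, and root password are placeholders):
<pre>
cd /data/mysql-archive/dump
gzip -d retain_160908_120000.tar.gz
tar -xf retain_160908_120000.tar
mysql -uroot -p[root pwd] < __restoreData.sql
</pre>
Run the mysql import from the dump directory, because __restoreData.sql references the table dump files by relative path.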

Script Listing

#!/bin/bash

# Change the values below where needed. Most of the time, you'll only be changing the "database connection and dump output location" and sometimes the "file compression options" and the "pre-existing file cleanup".  The "InnoDB dump options" should probably remain as written unless you really know what you are doing.  The "restore script setup" should always be left untouched.

#DATABASE CONNECTION AND DUMP OUTPUT LOCATION 
#Replace everything within square brackets, including the square brackets themselves.
	## Name of Retain database (i.e., retain):
	DBNAMES="[name of Retain database]" 
	## Location of the database server (e.g., localhost or IP/DNS hostname):
	HOST="--host=[localhost or IP/DNS hostname of DB server]"
	## Leave this as is unless the MySQL superuser is something other than "root", which is possible but never seen:
	USER="--user=root" 
	## Password for the root MySQL user:
	PASSWORD="--password=[root user password]" 
	## Self-explanatory (e.g., /data/mysql-archive/dump). Whatever is used, MySQL needs to be the owner of the destination directory and have r/w rights:
	BACKUP_DIR="[path to dump output directory]" 

#FILE COMPRESSION OPTIONS
#Enabling either or both of these options disables the easy restore.  Rarely would you have a need for the tar option; however, if a customer is transferring their database dump to GWAVA's datacenter or to some other location within their environment, using the GZip option would reduce the size of the file and protect it during transport.  You could use a script for copying the GZip files to the new location as they are created so that they can immediately be imported into the new system while the dump process continues to process more tables.
	
	#-- Valid values: yes / no --#
	
	##GZip individual tables:
	ZIP="no"

	##Tar all the individual files into one single .tar file at the end of the job:
	TAR="no" 

#PRE-EXISTING FILE CLEANUP
#Cleans up files from a previous run of the dump tables script.  This is a dangerous setting as you could lose dump files that you may want to keep.
	
	#-- Valid values: yes / no --#
	
	CLEANUP="no"
	
#INNODB DUMP OPTIONS
# You can change the following InnoDB options. They are optional but know what you're doing.

	#-- Note that the "--single-transaction" option is helpful for when Tomcat is left running.  If Tomcat is left down, then it isn't necessary.  Reference https://dev.mysql.com/doc/refman/5.5/en/mysqldump.html#option_mysqldump_single-transaction. --#
	
	OPTIONS="--default-character-set=utf8 --complete-insert --extended-insert --quick --opt --single-transaction"

#RESTORE SCRIPT SETUP
#No need to edit this section:
	RESTORESCRIPT="$BACKUP_DIR/__restoreData.sql"
	DATE=`/bin/date '+%y%m%d_%H%M%S'`


#---------------------------- BEGINNING OF SCRIPT - make no changes after this point ------------------------------------------

date2stamp () {
    date --utc --date "$1" +%s
}

stamp2date (){
    date --utc --date "1970-01-01 $1 sec" "+%Y-%m-%d %T"
}

dateDiff (){
    case $1 in
        -s)   sec=1;      shift;;
        -m)   sec=60;     shift;;
        -h)   sec=3600;   shift;;
        -d)   sec=86400;  shift;;
        *)    sec=86400;;
    esac
    dte1=$(date2stamp $1)
    dte2=$(date2stamp $2)
    diffSec=$((dte2-dte1))
    if ((diffSec < 0)); then abs=-1; else abs=1; fi
    echo $((diffSec/sec*abs))
}


#### Remove any files existing from a previous run of the dump tables script ####
if [ $CLEANUP == "yes" ]
then 
	echo Removing old files if they exist...
	rm -f ${BACKUP_DIR}/*.sql > /dev/null 2>&1
	rm -f ${BACKUP_DIR}/*.gz > /dev/null 2>&1
	rm -f ${BACKUP_DIR}/*.tar > /dev/null 2>&1
	cd ${BACKUP_DIR}
fi


#### Begin dump ####

TOTBEGIN=$(date2stamp now)
for DB in $DBNAMES
do
    echo "=========================================="
    echo    Dumping Database: ${DB}
    echo "=========================================="
    echo 'SET AUTOCOMMIT=0;' > $RESTORESCRIPT
    echo 'SET UNIQUE_CHECKS=0;' >> $RESTORESCRIPT
    echo 'SET FOREIGN_KEY_CHECKS=0;' >> $RESTORESCRIPT

###Ernie Research This:innodb_autoinc_lock_mode = 2

    mysqldump --databases --no-data $HOST $USER $PASSWORD $DB > ${BACKUP_DIR}/__createDatabase.sql
    echo 'source __createDatabase.sql;' >> $RESTORESCRIPT

    for TABLE in `mysql $HOST $USER $PASSWORD $DB -e 'show tables' | egrep -v 'Tables_in_' `; do
        TABLENAME=$(echo $TABLE|awk '{ printf "%s", $0 }')

	 if [ $ZIP == "yes" ]
          then FILENAME="${TABLENAME}.sql.gz"
	   else FILENAME="${TABLENAME}.sql"
	 fi

        echo Dumping Table: $TABLENAME
	 BEGIN=$(date2stamp now)
        echo 'source' $FILENAME';' >> $RESTORESCRIPT

	 if [ $ZIP == "yes" ]
          then mysqldump $OPTIONS $HOST $USER $PASSWORD $DB $TABLENAME | gzip > ${BACKUP_DIR}/${FILENAME}
          else mysqldump $OPTIONS $HOST $USER $PASSWORD $DB $TABLENAME > ${BACKUP_DIR}/${FILENAME}
	 fi
	
	 END=$(date2stamp now)
	 DIFF=$((END-BEGIN))
	 if ((DIFF < 300)); 
	  then echo "  " $((DIFF/1)) Seconds;
	  else echo "  " $((DIFF/60)) Minutes;
	 fi
    done

###Ernie Research This:innodb_autoinc_lock_mode = 1

    echo 'SET FOREIGN_KEY_CHECKS=1;' >> $RESTORESCRIPT
    echo 'SET UNIQUE_CHECKS=1;' >> $RESTORESCRIPT
    echo 'COMMIT;' >> $RESTORESCRIPT


    if [ $TAR == "yes" ] 
    then
      echo Making tar...
      tar -cf ${DB}_${DATE}.tar *.sql  > /dev/null 2>&1

      echo Compressing...
      gzip -9 ${DB}_${DATE}.tar > /dev/null 2>&1

      echo Removing temporary files...
      rm -f ${BACKUP_DIR}/*.sql > /dev/null 2>&1
      rm -f ${BACKUP_DIR}/*.tar > /dev/null 2>&1
    fi

    echo "Completed: " $DB
done
TOTEND=$(date2stamp now)

DIFF=$((TOTEND-TOTBEGIN))
 if ((DIFF < 300)); 
  then echo "  " $((DIFF/1)) Seconds;
  else echo "  " $((DIFF/60)) Minutes;
 fi

echo "=========================================="
echo "       All Databases Complete"
echo "       To import these dump files:"
	if [ $TAR == "yes" ] 
	then
echo "         * tar -xf [tar filename]" 
	fi
	if [ $ZIP == "yes" ]
	then 
echo "         * gzip -d [*.gz]"
	fi

echo "         * Import the __restoreData.sql, which will import all the other files: mysql -uroot -p[root pwd] < __restoreData.sql"
echo "=========================================="
