
Tuesday, 20 November 2012

taking a MySQL backup using LVM snapshots



To take a MySQL backup using LVM snapshots, we need to follow five steps:

1.) connect to mysql and run "FLUSH TABLES WITH READ LOCK" so that all writes are temporarily halted, then record the replication coordinates:
 FLUSH LOGS;
 show slave status\G
 show master status\G
2.) create the LVM snapshot
3.) mount the LVM snapshot
4.) unmount the LVM snapshot
5.) remove the snapshot

The detailed steps are as follows:

###########step 1 and 2 ###############
/usr/bin/mysql -uroot -proot  <<SQL_EOF
FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;
show slave status\G
show master status\G
SYSTEM /usr/sbin/lvcreate --snapshot --size=5G --name=backup-test /dev/volumegroup/fs0
UNLOCK TABLES;
SQL_EOF
##########step 3 #################
#first create the folder
mkdir /var/tmp/mysql-data1
#mounting the snapshot read-only on the temp folder we created above
/bin/mount -o ro /dev/volumegroup/backup-test /var/tmp/mysql-data1
########step 4  unmounting the snap shots ##############
/bin/umount /var/tmp/mysql-data1

##### step 5  removing the snap shots  ################
/usr/sbin/lvremove -f /dev/volumegroup/backup-test
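The snapshot lifecycle above (steps 2-5) can be sketched as a Python helper that just assembles the commands as argument lists, so the sequence can be reviewed, or handed to subprocess.run, before touching a real server. The volume group, logical volume, and mount point names are the ones from the example and are assumptions.

```python
# Sketch only: builds the step 2-5 commands as argv lists; it does NOT run
# them. Names (volumegroup, fs0, backup-test, mount point) are assumed from
# the example above.
def snapshot_backup_commands(vg="volumegroup", lv="fs0",
                             snap="backup-test", mnt="/var/tmp/mysql-data1"):
    return [
        ["/usr/sbin/lvcreate", "--snapshot", "--size=5G",
         f"--name={snap}", f"/dev/{vg}/{lv}"],               # step 2: create snapshot
        ["/bin/mount", "-o", "ro", f"/dev/{vg}/{snap}", mnt],  # step 3: mount read-only
        ["/bin/umount", mnt],                                  # step 4: unmount
        ["/usr/sbin/lvremove", "-f", f"/dev/{vg}/{snap}"],     # step 5: remove snapshot
    ]

for cmd in snapshot_backup_commands():
    print(" ".join(cmd))
```

To actually execute, each list could be passed to subprocess.run(cmd, check=True) while the FLUSH TABLES WITH READ LOCK connection from step 1 is held open.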

############ for more details, see:
https://raw.github.com/nmilford/scripts/master/MySQL/makeMysqlSnapshot.sh
### to know more about LVM snapshot
http://www.howtoforge.com/linux_lvm


Taking a backup using the MySQL Enterprise Backup tool (MEB)

####For taking the full backup (we need to create the directory /root/backup to store the backup files):

bin/mysqlbackup --user=root --password=xxxx --port=3306 --with-timestamp --backup-dir=/root/backup backup

####For taking the first incremental backup after the full backup (give the path of the full backup directory, 2012-11-14_01-21-15):

bin/mysqlbackup --user=root --password=xxxx --incremental --incremental-base=dir:/root/backup/2012-11-14_01-21-15  --with-timestamp --incremental-backup-dir=/root/backup backup

#### For taking the second incremental backup (give the path of the previous incremental backup directory, 2012-11-15_11-18-20):
./mysqlbackup -u root --with-timestamp --incremental --incremental-base=dir:/root/backup/2012-11-15_11-18-20 --incremental-backup-dir=/root/backup backup

#### For a third incremental backup, based on the previous incremental backup directory 2012-11-15_11-30-48:
./mysqlbackup -u root --with-timestamp --incremental --incremental-base=dir:/root/backup/2012-11-15_11-30-48 --incremental-backup-dir=/root/backup backup

For restoring the backup

first stop mysqld and remove (or move aside) the old MySQL data directory

The commands are as follows:

restoring the full backup file

### first applying the log
/root/meb/meb-3.8.0-rhel4-x86-32bit/bin/mysqlbackup --backup-dir=/root/backup/2012-11-15_10-49-10 apply-log

## restoring the full backup
/root/meb/meb-3.8.0-rhel4-x86-32bit/bin/mysqlbackup --defaults-file=/home/mysql/instance/inst3306/conf_old/my.cnf --backup-dir=/root/backup/2012-11-15_10-49-10 --datadir=/home/mysql/instance/inst3306/data copy-back

#appending the incremental backups to the full backup, oldest first

/root/meb/meb-3.8.0-rhel4-x86-32bit/bin/mysqlbackup --incremental-backup-dir=/root/backup/2012-11-15_11-18-20 --backup-dir=/root/backup/2012-11-15_10-49-10 apply-incremental-backup
/root/meb/meb-3.8.0-rhel4-x86-32bit/bin/mysqlbackup --incremental-backup-dir=/root/backup/2012-11-15_11-30-48 --backup-dir=/root/backup/2012-11-15_10-49-10 apply-incremental-backup
/root/meb/meb-3.8.0-rhel4-x86-32bit/bin/mysqlbackup --incremental-backup-dir=/root/backup/2012-11-15_11-35-25 --backup-dir=/root/backup/2012-11-15_10-49-10 apply-incremental-backup
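Incremental backups have to be applied to the full backup in chronological order. Because MEB's --with-timestamp directories are named YYYY-MM-DD_HH-MM-SS, a plain lexicographic sort gives that order; a small Python sketch (directory names taken from the session above):

```python
# The --with-timestamp directory names sort lexicographically into
# chronological order, so sorted() yields the apply order (oldest first).
incrementals = [
    "2012-11-15_11-35-25",
    "2012-11-15_11-30-48",
    "2012-11-15_11-18-20",
]
apply_order = sorted(incrementals)  # oldest first

for d in apply_order:
    print(f"mysqlbackup --incremental-backup-dir=/root/backup/{d} "
          f"--backup-dir=/root/backup/2012-11-15_10-49-10 apply-incremental-backup")
```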

For more information, refer to the MySQL Enterprise Backup documentation.

Thursday, 19 July 2012

user-defined exceptions using the SIGNAL command

drop table if exists emp;
create table emp (empno int, sal decimal, ename varchar(100));

insert into emp values (1,1000,'A');
insert into emp values (2,2000,'B');
insert into emp values (3,3000,'C');
insert into emp values (4,4000,'D');

DELIMITER $$
drop trigger if exists maintain_min_sal$$
CREATE TRIGGER `maintain_min_sal` BEFORE UPDATE ON `emp`
FOR EACH ROW
BEGIN
    DECLARE zero_multiply CONDITION FOR SQLSTATE '45000';
    -- DECLARE EXIT HANDLER FOR zero_multiply SET @error = 'salary cannot be decremented';
    IF NEW.sal < OLD.sal THEN
        SIGNAL zero_multiply SET MESSAGE_TEXT = 'salary cannot be decremented';
    END IF;
END$$
DELIMITER ;

mysql> update emp set sal=300 where empno=1;
ERROR 1644 (45000): salary cannot be decremented
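The same guard can be tried out without a MySQL server: SQLite (Python's stdlib sqlite3) has an analogous mechanism, RAISE(ABORT, ...) in a trigger, where MySQL uses SIGNAL. A sketch, not MySQL syntax:

```python
# SQLite analogue of the SIGNAL trigger above: RAISE(ABORT, msg) plays the
# role of SIGNAL SQLSTATE '45000' and surfaces as sqlite3.IntegrityError.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (empno INT, sal DECIMAL, ename VARCHAR(100));
INSERT INTO emp VALUES (1, 1000, 'A'), (2, 2000, 'B');
CREATE TRIGGER maintain_min_sal BEFORE UPDATE ON emp
FOR EACH ROW WHEN NEW.sal < OLD.sal
BEGIN
    SELECT RAISE(ABORT, 'salary cannot be decremented');
END;
""")

try:
    conn.execute("UPDATE emp SET sal = 300 WHERE empno = 1")
except sqlite3.IntegrityError as e:
    print(e)  # salary cannot be decremented
```

Raising the salary still works; only decrements hit the trigger.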

Thursday, 7 June 2012

query to find all the duplicate records

## the table syntax is as follows:
mysql> show create table house \G
*************************** 1. row ***************************
       Table: house
Create Table: CREATE TABLE `house` (
  `houseno` int(11) DEFAULT NULL,
  `name` varchar(10) DEFAULT NULL,
  `tin_id` int(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`tin_id`),
  KEY `idx_name` (`name`),
  KEY `idx_name1` (`name`,`houseno`),
  KEY `idx_house2` (`houseno`,`name`)
) ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
mysql> select * from house;
+---------+------+--------+
| houseno | name | tin_id |
+---------+------+--------+
|       1 | a    |      2 |
|       1 | a    |      4 |
|       2 | a    |      6 |
|       3 | b    |      8 |
|       3 | c    |     22 |
|       3 | c    |     24 |
|       4 | c    |     10 |
|       4 | c    |     12 |
|       4 | c    |     14 |
|       4 | c    |     16 |
|       4 | c    |     18 |
|       4 | c    |     20 |
+---------+------+--------+
12 rows in set (0.00 sec)
mysql> select max(tin_id),min(tin_id),houseno,name from house group by houseno,name having count(*)>1;
+-------------+-------------+---------+------+
| max(tin_id) | min(tin_id) | houseno | name |
+-------------+-------------+---------+------+
|           4 |           2 |       1 | a    |
|          24 |          22 |       3 | c    |
|          20 |          10 |       4 | c    |
+-------------+-------------+---------+------+
3 rows in set (0.00 sec)

## if we want only the tin_id values, the query is as follows:
SELECT h1.tin_id FROM house h1 WHERE EXISTS (SELECT NULL FROM house h2 WHERE h1.houseno = h2.houseno AND h1.name = h2.name AND h2.tin_id <> h1.tin_id) ;
+--------+
| tin_id |
+--------+
|      2 |
|      4 |
|     22 |
|     24 |
|     10 |
|     12 |
|     14 |
|     16 |
|     18 |
|     20 |
+--------+
10 rows in set (0.00 sec)

## if we want to see the full set of duplicate records, the query is as follows:
SELECT h1.* FROM house h1 WHERE EXISTS (SELECT NULL FROM house h2 WHERE h1.houseno = h2.houseno AND h1.name = h2.name AND h2.tin_id <> h1.tin_id);
+---------+------+--------+
| houseno | name | tin_id |
+---------+------+--------+
|       1 | a    |      2 |
|       1 | a    |      4 |
|       3 | c    |     22 |
|       3 | c    |     24 |
|       4 | c    |     10 |
|       4 | c    |     12 |
|       4 | c    |     14 |
|       4 | c    |     16 |
|       4 | c    |     18 |
|       4 | c    |     20 |
+---------+------+--------+
10 rows in set (0.00 sec)
## another solution, joining back against the duplicated (houseno, name) pairs:

SELECT tin_id
 FROM (SELECT houseno, name
         FROM house
       GROUP BY houseno, name
       HAVING count(*) > 1) temp,
      house h
WHERE temp.houseno = h.houseno AND temp.name = h.name;
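The EXISTS query is standard SQL, so it can be verified against an in-memory SQLite copy of the table (Python's stdlib sqlite3), using the same rows as the session above:

```python
# Rebuild the house table in SQLite and run the duplicate-finding EXISTS
# query from the post unchanged.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE house (houseno INT, name TEXT, tin_id INT PRIMARY KEY)")
rows = [(1, 'a', 2), (1, 'a', 4), (2, 'a', 6), (3, 'b', 8),
        (3, 'c', 22), (3, 'c', 24), (4, 'c', 10), (4, 'c', 12),
        (4, 'c', 14), (4, 'c', 16), (4, 'c', 18), (4, 'c', 20)]
conn.executemany("INSERT INTO house VALUES (?,?,?)", rows)

dupes = conn.execute("""
    SELECT h1.tin_id FROM house h1
    WHERE EXISTS (SELECT NULL FROM house h2
                  WHERE h1.houseno = h2.houseno AND h1.name = h2.name
                    AND h2.tin_id <> h1.tin_id)
    ORDER BY h1.tin_id""").fetchall()
dupe_ids = [t for (t,) in dupes]
print(dupe_ids)  # the ten duplicate tin_ids
```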

Friday, 25 May 2012

extracting an individual table or schema from a full backup file in mysql

vi restore_indivudual_table.sh

#!/bin/bash
file=`pwd`
FILE="$file/input.log"
#FILE=/tmp/check/script/input.log
if [ -f $FILE ];
then
   echo "File $FILE exists"
   rm -rf $FILE
else
   echo "File $FILE does not exist"
fi
# Check if there is input to this script
if [ $# -lt 1 ]; then # $# contains the total number of arguments to the script
  echo "Usage: $0 [argument1 2 3...] are mentioned below " # $0 is the executed command
  echo -e " \t\t\t $0 -f <file name> -s <schema> -t <optionally a table name if required>"
  exit 1
fi
while getopts f:hb:s:t: opt
do
   case "$opt" in
      f) profile=$OPTARG
         echo $profile;;
      h) usage;;
      s) info=$OPTARG ;;
      t) table=$OPTARG ;;
      \?) usage;;
   esac
done
rm -f ${info}.sql ${table}.sql
echo "Profile ------------> $profile ============== INFO----------$info"
grep -n 'CREATE DATABASE' $profile > input.log
val=`cat input.log|grep $info|cut -f1 -d ":"`
initial_value=$val
echo "The initial Value after Regex is $initial_value"
value=`cat input.log|sed -n "/${info}/{n;p;}"|cut -f1 -d ":"`
final_value=`expr $value - 1`
echo "The next Value after Regex is $final_value"
head -18 $profile > ${info}.sql
sed -n "${initial_value},${final_value}p" $profile >> ${info}.sql
tail -11 $profile >> ${info}.sql

if [ -z "${table}" ]
then
echo table is not set at all
else
echo "${table} is for creating table"
grep -n "DROP TABLE" ${info}.sql > table.log
tval=`cat table.log|grep -w $table|cut -f1 -d ":"`
initial_tvalue=$tval
echo "The initial Value after Regex is $initial_tvalue"

#tvalue=`cat table.log|sed -n "/${table}/{n;p;}"|cut -f1 -d ":"`
tvalue=`cat table.log|sed -n "/\<${table}\>/{n;p;}"|cut -f1 -d ":"`
final_tvalue=`expr $tvalue - 1`
echo "The next Value after Regex is $final_tvalue"
sed -n "${initial_tvalue},${final_tvalue}p" ${info}.sql >> ${table}.sql
fi
rm input.log


###steps for executing the above script ######
####for extracting an individual schema
sh restore_indivudual_table.sh -f /home/user1/backup1.sql -s testschema
######for extracting an individual table#######
sh restore_indivudual_table.sh -f /home/user1/backup1.sql -s testschema -t table1
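The core of the script is the grep/sed slicing: find the line where each `CREATE DATABASE` block starts and cut from the wanted schema's start to the next schema's start. A simplified Python sketch of just that slicing (the dump text here is a made-up miniature, not a real mysqldump, and it skips the head/tail header and footer handling):

```python
# Slice one schema's section out of a dump by locating the CREATE DATABASE
# boundary lines, mirroring the grep -n / sed -n "${start},${end}p" logic.
dump = """\
-- header
CREATE DATABASE `testschema`;
USE `testschema`;
DROP TABLE IF EXISTS `table1`;
CREATE TABLE `table1` (id INT);
CREATE DATABASE `otherschema`;
USE `otherschema`;
DROP TABLE IF EXISTS `t2`;
""".splitlines()

def extract_schema(lines, schema):
    starts = [i for i, l in enumerate(lines) if l.startswith("CREATE DATABASE")]
    begin = next(i for i in starts if schema in lines[i])   # this schema's start
    later = [i for i in starts if i > begin]
    end = later[0] if later else len(lines)                 # next schema's start
    return lines[begin:end]

section = extract_schema(dump, "testschema")
print("\n".join(section))
```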

Thursday, 10 May 2012

Changing a huge table (ALTER TABLE) with zero downtime

Percona Toolkit is the best tool for achieving this.
1. Steps for installing:
wget http://www.percona.com/redir/downloads/percona-toolkit/2.1.1/percona-toolkit-2.1.1-1.noarch.rpm
2. rpm -ivh percona-toolkit-2.1.1-1.noarch.rpm
3. Sometimes the following dependencies need downloading as well:
wget ftp://mirror.switch.ch/pool/1/mirror/scientificlinux/5rolling/x86_64/SL/perl-TermReadKey-2.30-5.el5.x86_64.rpm
wget ftp://mirror.switch.ch/pool/2/mirror/fedora/linux/updates/15/x86_64/perl-DBD-MySQL-4.018-7.fc15.x86_64.rpm

NOTE: PARTITIONS WILL NOT WORK IF WE HAVE FOREIGN KEY CONSTRAINTS
4. Some examples are as follows:
### just adding a column
pt-online-schema-change --alter="add column stat1 tinyint" D=ledger,t=account_balance --alter-foreign-keys-method='auto' --execute
### adding two columns, one of them with a foreign key
pt-online-schema-change --alter="add column operation_test4 varchar(50),add foreign key xyz4(operation_test4) references operation(type), add status5 varchar(10) default 'yes'"  D=wallet,t=transaction  --alter-foreign-keys-method='drop_swap' --execute
### Altering the ordinary table with partitions
pt-online-schema-change --alter="PARTITION BY RANGE (id) (PARTITION p0 VALUES LESS THAN (1000000),PARTITION p1 VALUES LESS THAN  (5000000),PARTITION p2 VALUES LESS THAN  (10000000),PARTITION p3 VALUES LESS THAN  (50000000),PARTITION p4 VALUES LESS THAN MAXVALUE)" D=wallet,t=transaction_test2 --execute
### this is for prompting for the password
pt-online-schema-change --alter="PARTITION BY RANGE (id) (PARTITION p0 VALUES LESS THAN (1000000),PARTITION p1 VALUES LESS THAN (5000000),PARTITION p2 VALUES LESS THAN (10000000),PARTITION p3 VALUES LESS THAN (50000000),PARTITION p4 VALUES LESS THAN MAXVALUE)" -u kiran --ask-pass D=wallet,t=transaction_test2 --execute
### this is for specifying the hostname
pt-online-schema-change --alter="PARTITION BY RANGE (id) (PARTITION p0 VALUES LESS THAN (1000000),PARTITION p1 VALUES LESS THAN (5000000),PARTITION p2 VALUES LESS THAN (10000000),PARTITION p3 VALUES LESS THAN (50000000),PARTITION p4 VALUES LESS THAN MAXVALUE)" -u kiran -h 10.10.8.141 --ask-pass D=wallet,t=transaction_test2 --execute
### this is for username and password
NOTE: PARTITIONS WILL NOT WORK IF THE TABLE HAS FOREIGN KEY CONSTRAINTS
pt-online-schema-change --alter="PARTITION BY RANGE (id) (PARTITION p0 VALUES LESS THAN (1000000),PARTITION p1 VALUES LESS THAN (5000000),PARTITION p2 VALUES LESS THAN (10000000),PARTITION p3 VALUES LESS THAN (50000000),PARTITION p4 VALUES LESS THAN MAXVALUE)" -u kiran -h 10.10.8.141 -p kiran@123 D=wallet,t=transaction_test2 --execute

The disadvantage of this tool is that it does not work if the table already has triggers.
The limitations are as follows:
pt-online-schema-change --alter="add column stat1 tinyint "  D=ledger,t=ppacounts_data  --alter-foreign-keys-method='auto'
The table `ledger`.`account_balance` has triggers.  This tool needs to create its own triggers, so the table cannot already have triggers.
[root@sniper kiran.chinta]#  pt-online-schema-change --alter="add column stat1 tinyint "  D=veles,t=ppacounts_data  --alter-foreign-keys-method='auto'
The table `veles`.`ppacounts_data` has triggers.  This tool needs to create its own triggers, so the table cannot already have triggers.
 pt-online-schema-change --alter="add column stat1 tinyint "  D=veles,t=transaction_log  --alter-foreign-keys-method='auto'
The table `veles`.`transaction_log` has triggers.  This tool needs to create its own triggers, so the table cannot already have triggers.

For more information about the tool:
http://www.percona.com/doc/percona-toolkit/2.1/pt-online-schema-change.html

Thursday, 12 April 2012

Avoiding lock wait timeout issues

Whenever we face lock wait timeout issues,
we need to find which thread is locking and blocking.
The query below shows, for each waiting transaction, the MySQL thread id and query that are blocking it.
The query is as follows:
SELECT r.trx_id waiting_trx_id,  r.trx_mysql_thread_id waiting_thread,
       r.trx_query waiting_query,
       b.trx_id blocking_trx_id, b.trx_mysql_thread_id blocking_thread,
       b.trx_query blocking_query
FROM       information_schema.innodb_lock_waits w
INNER JOIN information_schema.innodb_trx b  ON  b.trx_id = w.blocking_trx_id
INNER JOIN information_schema.innodb_trx r  ON  r.trx_id = w.requesting_trx_id;
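The join logic of the query can be illustrated without a lock wait actually in progress, by mocking the two information_schema tables in SQLite (Python's stdlib sqlite3). The row values below are invented for the demonstration:

```python
# Mocked innodb_trx / innodb_lock_waits rows: transaction A4 (thread 6)
# waits on a lock held by A3 (thread 5). The join resolves who blocks whom.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE innodb_trx (trx_id TEXT, trx_mysql_thread_id INT, trx_query TEXT);
CREATE TABLE innodb_lock_waits (requesting_trx_id TEXT, blocking_trx_id TEXT);
INSERT INTO innodb_trx VALUES ('A3', 5, 'UPDATE t SET x=1 WHERE id=1'),
                              ('A4', 6, 'UPDATE t SET x=2 WHERE id=1');
INSERT INTO innodb_lock_waits VALUES ('A4', 'A3');
""")

row = conn.execute("""
    SELECT r.trx_mysql_thread_id waiting_thread,
           b.trx_mysql_thread_id blocking_thread
    FROM innodb_lock_waits w
    JOIN innodb_trx b ON b.trx_id = w.blocking_trx_id
    JOIN innodb_trx r ON r.trx_id = w.requesting_trx_id""").fetchone()
print(row)  # (waiting thread, blocking thread)
```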

In order to avoid the lock wait timeout issue permanently, we have two solutions:
1. Setting the transaction isolation level to read committed.
The command is as follows:
set global transaction isolation level read committed;

2. Setting innodb_locks_unsafe_for_binlog = 1 in the my.cnf file and restarting mysql.
For more information we can refer to the following urls:
http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-information-schema-examples.html
http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_locks_unsafe_for_binlog
http://en.wikipedia.org/wiki/Isolation_%28database_systems%29

Thursday, 22 March 2012

keeping the slave always delayed

Often a DBA will delete data or drop a table by mistake, and the statement will replicate to the slave very fast.
In order to avoid that, we should always keep one slave delayed.
For that we have two scripts:

########first script############
vi /home/kiran.chinta/scripts/checkslave.sh
#!/bin/sh
MUSER="kiran"
MPASS="kiran"
MHOST="db1"
mysql -u $MUSER -p$MPASS -h $MHOST -Bse "show master status" > /home/kiran.chinta/scripts/slavepos.txt

#### second script #############
vi /home/kiran.chinta/scripts/slavedetail.sh
#!/bin/sh
fname=`tr -s ' ' < /home/kiran.chinta/scripts/slavepos.txt | awk '{print $1}'`
bpos=`tr -s ' ' < /home/kiran.chinta/scripts/slavepos.txt | awk '{print $2}'`
user=dba
pass=dba
echo "$fname"
echo "$bpos"
mysql -u $user -p$pass -Bse "START SLAVE IO_THREAD UNTIL MASTER_LOG_FILE='$fname', MASTER_LOG_POS=$bpos"

After creating the two scripts
We should have the following crons

##### the entries given below should run from the crontab #################
#getting the master status
30 02 * * * sh -xf /home/kiran.chinta/scripts/checkslave.sh  > checksucc.txt 2>checkfail.txt
#updating the slave status
30 22 * * * sh -xf /home/kiran.chinta/scripts/slavedetail.sh  > slavedetail_succ.txt 2>slavedetail_fail.txt
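The tr/awk pipeline in slavedetail.sh just extracts the first two columns of the saved "show master status" line (binlog file and position). In Python that reduces to a split(); the sample line below mimics the tab-separated -Bse output and is invented for the demo:

```python
# Parse a saved "show master status" batch-mode line into (file, position),
# mirroring tr -s ' ' | awk '{print $1}' / '{print $2}'.
sample = "mysql-bin.000042\t1337\t\t"

def parse_master_status(line):
    fields = line.split()          # splits on any whitespace, tabs included
    return fields[0], int(fields[1])

fname, bpos = parse_master_status(sample)
print(fname, bpos)
```

The pair then feeds START SLAVE IO_THREAD UNTIL MASTER_LOG_FILE=..., MASTER_LOG_POS=... as in the script.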



Thursday, 16 February 2012

query to find the maximum time for each businessurl

create table tmp_test (time_in_sec bigint, transid int, businessurl varchar(100));

The table data are as follows:

mysql> select * from tmp_test;
+-------------+---------+-------------+
| time_in_sec | transid | businessurl |
+-------------+---------+-------------+
|          10 |       1 | htt:/test1  |
|          20 |       2 | htt:/test1  |
|          33 |       3 | htt:/test1  |
|          33 |       4 | htt:/org1   |
|          33 |       5 | htt:/org1   |
|         100 |       6 | htt:/org1   |
|         100 |       6 | htt:/org2   |
+-------------+---------+-------------+

query stage1: (we want the sum of time grouped by transid and businessurl)

mysql> select sum(time_in_sec), transid,businessurl from tmp_test group by transid,businessurl;
+------------------+---------+-------------+
| sum(time_in_sec) | transid | businessurl |
+------------------+---------+-------------+
|               10 |       1 | htt:/test1  |
|               20 |       2 | htt:/test1  |
|               33 |       3 | htt:/test1  |
|               33 |       4 | htt:/org1   |
|               33 |       5 | htt:/org1   |
|              100 |       6 | htt:/org1   |
|              100 |       6 | htt:/org2   |
+------------------+---------+-------------+
7 rows in set (0.00 sec)


From the above query we want, for each businessurl, the row with the maximum time.


the output should be as follows:

33 | 4 | htt:/org1
100 | 6 | htt:/org2


solution:

the above query can be written in two ways:

1)
select
  max(total) total,
  cast(substring_index(group_concat(transid ORDER BY total desc), ',', 1) as unsigned) transid,
  businessurl
from
  ( select sum(time_in_sec) total, transid, businessurl
    from tmp_test
    group by transid, businessurl ) Z
group by businessurl;


2)
DROP TEMPORARY TABLE IF EXISTS tmp_table2;
CREATE TEMPORARY TABLE tmp_table2 AS SELECT sum(time_in_sec) total, transid, businessurl FROM tmp_test GROUP BY transid,businessurl;

CREATE INDEX in_business_total
  ON tmp_table2(businessurl, total);

SELECT Z.total,
      Z.businessurl,
      (SELECT min(t2.transid)
         FROM tmp_table2 t2
        WHERE t2.businessurl = Z.businessurl AND t2.total = Z.total)
         transid
 FROM (SELECT max(t.time_in_sec) total, t.businessurl
         FROM tmp_test t
       GROUP BY t.businessurl) Z;
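Both variants answer a classic group-wise-maximum problem. It can be checked against the sample data with SQLite (Python's stdlib sqlite3); since SQLite's group_concat has no ORDER BY clause, this sketch uses the correlated-subquery shape instead of solution 1's trick:

```python
# Group-wise maximum on the sample data: for each businessurl, keep the
# (total, transid) rows whose total equals that url's maximum total.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tmp_test (time_in_sec INT, transid INT, businessurl TEXT)")
conn.executemany("INSERT INTO tmp_test VALUES (?,?,?)", [
    (10, 1, 'htt:/test1'), (20, 2, 'htt:/test1'), (33, 3, 'htt:/test1'),
    (33, 4, 'htt:/org1'), (33, 5, 'htt:/org1'), (100, 6, 'htt:/org1'),
    (100, 6, 'htt:/org2')])

rows = conn.execute("""
    SELECT z.total, z.transid, z.businessurl
    FROM (SELECT sum(time_in_sec) total, transid, businessurl
          FROM tmp_test GROUP BY transid, businessurl) z
    WHERE z.total = (SELECT max(z2.total)
                     FROM (SELECT sum(time_in_sec) total, businessurl
                           FROM tmp_test GROUP BY transid, businessurl) z2
                     WHERE z2.businessurl = z.businessurl)
    ORDER BY z.businessurl""").fetchall()
print(rows)  # one maximum row per businessurl
```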

Wednesday, 15 February 2012

installing and configuring the ZRM tool for mysql databases

##### for downloading and installing the ZRM recovery tool:
wget http://www.zmanda.com/downloads/community/ZRM-MySQL/2.2/RPM/MySQL-zrm-2.2.0-1.noarch.rpm
install the rpm package as follows:
rpm -ivh MySQL-zrm-2.2.0-1.noarch.rpm

check for rpm packages
 ls -lh /usr/bin/mysql-zrm*
-rwxr-x--- 1 mysql mysql 3.5K Feb  4  2010 /usr/bin/mysql-zrm
-rwxr-x--- 1 mysql mysql 1.8K Feb  4  2010 /usr/bin/mysql-zrm-abort-backup
-rwxr-x--- 1 mysql mysql  70K Feb  4  2010 /usr/bin/mysql-zrm-backup
-rwxr-x--- 1 mysql mysql 1.6K Feb  4  2010 /usr/bin/mysql-zrm-check
-rwxr-x--- 1 mysql mysql 2.9K Feb  4  2010 /usr/bin/mysql-zrm-extract-backup
-rwxr-x--- 1 mysql mysql 1.8K Feb  4  2010 /usr/bin/mysql-zrm-getconf
-rwxr-x--- 1 mysql mysql 4.4K Feb  4  2010 /usr/bin/mysql-zrm-list
-rwxr-x--- 1 mysql mysql  12K Feb  4  2010 /usr/bin/mysql-zrm-manage-backup
-rwxr-x--- 1 mysql mysql 2.7K Feb  4  2010 /usr/bin/mysql-zrm-migrate-file-ownership
-rwxr-x--- 1 mysql mysql  13K Feb  4  2010 /usr/bin/mysql-zrm-parse-binlogs
-rwxr-x--- 1 mysql mysql 4.9K Feb  4  2010 /usr/bin/mysql-zrm-purge
-rwxr-x--- 1 mysql mysql  29K Feb  4  2010 /usr/bin/mysql-zrm-reporter
-rwxr-x--- 1 mysql mysql  31K Feb  4  2010 /usr/bin/mysql-zrm-restore
-rwxr-x--- 1 mysql mysql  13K Feb  4  2010 /usr/bin/mysql-zrm-scheduler
-rwxr-x--- 1 mysql mysql 8.1K Feb  4  2010 /usr/bin/mysql-zrm-verify-backup
The configuration files are located in:
ls -lh /etc/mysql-zrm/
-rwxrwx--- 1 mysql mysql  12K Feb  4  2010 mysql-zrm.conf
-rwxrwx--- 1 mysql mysql   48 Feb  4  2010 mysql-zrm-release
-rwxrwx--- 1 mysql mysql 3.2K Feb  4  2010 mysql-zrm-reporter.conf
-rw-rw---- 1 mysql mysql  596 Feb  4  2010 RSS.header

After successful installation and configuration,
we want to take a full backup using the ZRM tools:
cd /etc/mysql-zrm
mkdir fullbackup
cp mysql-zrm.conf fullbackup/
cd fullbackup
ls -lh
-rwxr-x--- 1 mysql mysql 9.0K Feb  4  2010 mysql-zrm.conf
Change the configuration file mysql-zrm.conf.
The changes are as follows:
backup-level=0
backup-mode=logical
user="user1"
password="user1"
destination=/var/lib/mysql-zrm

steps for taking the backup (full backup)
mysql-zrm-scheduler --now --backup-set fullbackup

for more options, the mysql-zrm usage is as follows:
mysql-zrm --action --help
USAGE:
--action <backup|restore|schedule-backup|report|list|getconf|setconf|purge|parse-binlogs|check|manage-backup|verify-backup|extract-backup|abort-backup>
Help on individual action items can be obtained by specifying --help with the required action.
For example: 'mysql-zrm --action backup --help'

1) restoring the full backup
mysql-zrm --action restore --backup-set fullbackup --source-directory /var/lib/mysql-zrm/fullbackup/20120215114946/

2) steps for restoring an individual database from the full backup.
first go to the index file,
then change logical-databases to the required databases (db1 db2, etc.)
vi index
backup-set=fullbackup
backup-date=20120215114946
mysql-server-os=Linux/Unix
backup-type=regular
host=localhost
backup-date-epoch=1329286786
mysql-zrm-version=ZRM for MySQL Community Edition - version 2.2.0
mysql-version=5.5.19-log
backup-directory=/var/lib/mysql-zrm/fullbackup/20120215114946
backup-level=0
logical-databases=db1 db2
next-binlog=mysql-bin.000004
backup-size=478.09 MB
read-locks-time=00:00:19
flush-logs-time=00:00:00
backup-time=00:00:19
backup-status=Backup succeeded
mysql-zrm --action restore --backup-set fullbackup --source-directory /var/lib/mysql-zrm/fullbackup/20120215114946/ --databases "db1"
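The index file is plain "key=value" text, so finding which databases a backup set contains (the logical-databases line edited above) takes only a few lines of Python. The sample content is an excerpt of the index file shown:

```python
# Parse the key=value lines of a ZRM index file and pull out the
# logical-databases list.
index_text = """\
backup-set=fullbackup
backup-date=20120215114946
logical-databases=db1 db2
next-binlog=mysql-bin.000004
"""

meta = dict(line.split("=", 1) for line in index_text.splitlines() if "=" in line)
databases = meta["logical-databases"].split()
print(databases)
```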

More about the zrm utilities:
The url  is as follows:
http://www.zmanda.com/quick-mysql-backup.html

Monday, 9 January 2012

Query to find the latest address details based on created date (one person can have multiple addresses)

The person table structure is as follows:

CREATE TABLE person
(
  id         int(11) AUTO_INCREMENT,
  fname      varchar(100),
  lastname   varchar(100),
  PRIMARY KEY(id)
);
The address table structure is as follows:

CREATE TABLE address(id int(11) AUTO_INCREMENT,  person_id int(11), city varchar(100), state varchar(100), zip varchar(10), created_date datetime, PRIMARY KEY(id), CONSTRAINT  fk_person_id FOREIGN KEY (person_id) REFERENCES person(id) , KEY(person_id, created_date));

to improve performance we can create a composite index (the KEY in the table definition above already provides this):
create index idx1 on address (person_id, created_date);

The query is as follows:

SELECT *,
      city,
      state,
      zip,
      created_date
 FROM (SELECT p.*,
              (SELECT id
                 FROM address
                WHERE person_id = p.id
               ORDER BY person_id DESC, created_date DESC
                LIMIT 1)
                 latest_adr_id
         FROM person p
   ) Z,
      address adr
WHERE Z.latest_adr_id = adr.id;
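The correlated-subquery approach runs essentially unchanged on SQLite (Python's stdlib sqlite3); the sample people and addresses below are invented for the demo:

```python
# Latest address per person: the inner subquery picks each person's newest
# address id, then the outer join fetches that address's columns.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, fname TEXT, lastname TEXT);
CREATE TABLE address (id INTEGER PRIMARY KEY, person_id INT, city TEXT,
                      state TEXT, zip TEXT, created_date TEXT);
INSERT INTO person VALUES (1, 'Ann', 'A'), (2, 'Bob', 'B');
INSERT INTO address VALUES
  (1, 1, 'Pune',   'MH', '411001', '2011-01-01'),
  (2, 1, 'Mumbai', 'MH', '400001', '2012-01-01'),
  (3, 2, 'Delhi',  'DL', '110001', '2011-06-01');
""")

rows = conn.execute("""
    SELECT z.id, adr.city, adr.created_date
    FROM (SELECT p.*,
                 (SELECT id FROM address
                  WHERE person_id = p.id
                  ORDER BY created_date DESC LIMIT 1) latest_adr_id
          FROM person p) z
    JOIN address adr ON z.latest_adr_id = adr.id
    ORDER BY z.id""").fetchall()
print(rows)  # latest address per person
```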