MySQL Backups using LVM Snapshots

Original post: http://mike.kruckenberg.com/archives/2006/05/mysql_backups_u.html

May 4, 2006


The second thing on my list coming away from the MySQL Users Conference (the first was a question about limits in mysqldump) was to do a write-up on how to use Linux LVM (Logical Volume Manager) to take quick snapshots of your database. Many folks are using this as an alternative to the tools provided with MySQL. It's a good choice when you need a backup that lets you restore the entire database from files.

Taneli has already done a write-up of the steps to complete this process; I want to add a few things:

LVM not a True Online Backup

LVM doesn't do a true online backup, although it gets close. The actual LVM snapshot command is fast, but to get a good copy of your data all of your MySQL tables must be read-locked so that no writes happen during the snapshot. When you ask MySQL to flush all tables with a read lock, it works through the tables (after sorting the list of tables to lock), making sure the data is on disk and then taking a read lock on each one. If a table is locked by another thread, your request waits until that lock is freed. The bottom line is that on a busy database with many tables and very active writing, FLUSH TABLES WITH READ LOCK can take a while. For example, if tables 1, 2, and 3 are already locked but table 4 has just started a massive update, you have to wait for table 4 to finish before you get its lock, and tables 1, 2, and 3 stay locked the whole time.
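To make the ordering concrete, here's a rough sketch of the sequence. The volume path, snapshot name and --size are placeholders for whatever matches your setup, and remember the lock only lasts as long as the client connection that took it:

# session 1: take the global read lock and leave this client connected
mysql> FLUSH TABLES WITH READ LOCK;

# session 2: snapshot the volume holding the MySQL data directory
[root@db ~]# lvcreate --snapshot --size 1G --name mysql-snap /dev/VolGroup00/LogVol00

# session 1 again: release the locks as soon as lvcreate returns
mysql> UNLOCK TABLES;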

The other thing about getting read locks is that if you're using a transactional engine, transactions are committed as a part of getting the read lock. There's some extra work to be done to avoid the ramifications of committing in-process transactions.

Disk Requirements

LVM snapshots work by keeping an exception log. As long as the snapshot exists (until it's removed with lvremove), LVM tracks changes to the disk in that log. When you read from the snapshot, LVM knows which parts of the disk have changed since the snapshot was created and serves those parts from the exception log instead of the live volume.

This is where the --size option comes in. When you issue the lvcreate --snapshot command you tell it how much space to allow for recording changes. If you have a very active database you'll need to make sure that the amount of data that will change over the time you are using the snapshot will fit in your specified allocation. The manpage recommends 15-20% of the disk size.
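For a concrete (made-up) example, and a way to keep an eye on how full the change area is getting while the snapshot exists:

# --size only has to hold the writes that happen while the snapshot
# exists, not a copy of the whole volume; 10G is an arbitrary guess
[root@db ~]# lvcreate --snapshot --size 10G --name mysql-snap /dev/VolGroup00/LogVol00

# the snapshot's usage shows up in lvs (the column is labelled Snap% or
# Data% depending on the LVM version); if it reaches 100% the snapshot
# is invalidated and the backup copy is useless
[root@db ~]# lvs VolGroup00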

Performance

While a snapshot is active LVM has to keep track of the changes. Any time a change is made on the disk, LVM has to make a note of it in the exception file. I'm not sure how significant the overhead is; it depends on your database activity, size, and how long you need the snapshot. The advice seems to be to take the snapshot, mount it, copy the data off, and remove it as quickly as possible.
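That translates into something like this (the mount point and paths are made up, and assume the data directory is the stock /var/lib/mysql on the snapshotted volume):

[root@db ~]# mkdir -p /mnt/mysql-snap
[root@db ~]# mount -o ro /dev/VolGroup00/mysql-snap /mnt/mysql-snap
[root@db ~]# rsync -a /mnt/mysql-snap/var/lib/mysql/ /backup/mysql/
[root@db ~]# umount /mnt/mysql-snap
# remove the snapshot as soon as the copy is done so LVM stops
# tracking changes in the exception store
[root@db ~]# lvremove -f /dev/VolGroup00/mysql-snap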

Jeremy Cole wrote about an open-source pluggable tool he's writing for things like LVM. At the UC he was talking about doing some more serious testing of the exception file size and the performance hit you take with active snapshots. It will be interesting to see what comes from that.

Update: MyISAM vs InnoDB

Harrison (who needs to start a weblog) points out that my post is MyISAM-centric. With InnoDB the LVM snapshot can be taken directly from the filesystem without locking the tables. On restore, InnoDB recovers as if there had been a system crash, rolling the data forward from its own logs, and from there you can replay the binary logs. Given that, it seems like LVM snapshots give you something very close to the ibbackup tool you can buy from Innobase (now part of Oracle).
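The thing to verify before relying on that is that the InnoDB data files and log files live on the same logical volume, so one snapshot captures them at the same instant. A quick check and the lock-free snapshot might look like this (paths are the stock defaults, adjust to your setup):

# df on a file reports the filesystem it lives on; both should land on
# the same logical volume
[root@db ~]# df -h /var/lib/mysql /var/lib/mysql/ib_logfile0
# no FLUSH TABLES WITH READ LOCK needed in this case
[root@db ~]# lvcreate --snapshot --size 10G --name mysql-snap /dev/VolGroup00/LogVol00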

Posted by mike at May 4, 2006 8:54 PM



Comments

It is worth noting that if you are using InnoDB, and both the log and datafiles are backup-able (is that a word?) in a single snapshot by LVM, then you *don't* need to lock the tables first.

Since LVM takes a consistent snapshot, InnoDB will be able to recover, just as if there was a system crash, from the LVM backup when restored. It even tells you the binary log coordinates to recover from during the crash recovery for replaying the binary logs.
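For completeness, replaying from those coordinates would look something like this (the binlog file name and position are invented for the example):

# after restoring the files and letting InnoDB recover, replay the
# later changes starting at the coordinates printed during recovery
[root@db ~]# mysqlbinlog --start-position=123456 db-bin.000042 | mysql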

Posted by: Harrison Fisk at May 5, 2006 12:22 PM

just a note to anyone hoping to start using lvm snapshots on an already running system - even if the machine is already using lvm, you're probably out of luck. let's take a look at an example system.

[root@db ~]# uname -a
Linux db 2.6.11-1.1369_FC4smp #1 SMP Thu Jun 2 23:08:39 EDT 2005 i686 i686 i386 GNU/Linux

[root@db ~]# df -h
Filesystem Size Used Avail Use%
/dev/mapper/VolGroup00-LogVol00 658G 51G 574G 9%
/dev/sda1 183M 14M 160M 9%
/dev/shm 4.0G 4.0K 4.0G 1%

this shows that we're using lvm and that we've got plenty of free space. let's try to create a snapshot:

[root@db ~]# lvcreate --size 100m --snapshot --name snap --permission r --verbose /dev/VolGroup00/LogVol00
Setting chunksize to 16 sectors.
Finding volume group "VolGroup00"
Rounding up size to full physical extent 128.00 MB
Insufficient free extents (1) in volume group VolGroup00. 4 required.

that didn't work. we know we have plenty of free space, but that's not what the error message said: it said we don't have enough free extents. ok, let's look at our extents:

[root@db ~]# vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 682.91 GB
PE Size 32.00 MB
Total PE 21853
Alloc PE / Size 21852 / 682.88 GB
Free PE / Size 1 / 32.00 MB

so we have 1 unused physical extent ("Free PE") which is 32MB, not enough to create a snapshot and pull the backup off on a system which is changing relatively rapidly. how did this happen? it's fairly normal when setting up a system to allocate the entire disk available to filesystems, so that's what we did and this is what we ended up with.

you can free extents by shrinking the filesystem (ext3 in this case) via resize2fs and then reducing the volume size via lvreduce, but this is too high-risk a procedure for most people to run on a production machine. it seems that in order to take advantage of lvm you have to have planned for it when you set the system up (reasonable and prudent, yes, but not relevant if you're inheriting an existing setup).
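for the record, here's roughly what that shrink procedure looks like (sizes invented; if LogVol00 is the root filesystem it has to be done from rescue media, since ext3 can't be shrunk while mounted):

[root@db ~]# umount /dev/VolGroup00/LogVol00
[root@db ~]# e2fsck -f /dev/VolGroup00/LogVol00
# shrink the fs to less than the new volume size, then shrink the
# volume, then grow the fs back up to fill the (now smaller) volume
[root@db ~]# resize2fs /dev/VolGroup00/LogVol00 600G
[root@db ~]# lvreduce --size 620G /dev/VolGroup00/LogVol00
[root@db ~]# resize2fs /dev/VolGroup00/LogVol00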

ps if anyone can prove me wrong here i will be overjoyed

Posted by: bt at October 28, 2006 2:59 PM

You have created one huge volume of 682.88 GB ("Alloc PE / Size 21852 / 682.88 GB"), and it's almost empty ("/dev/mapper/VolGroup00-LogVol00 658G 51G 574G 9%"), which means you can resize it (actually, you have to resize it if you want to create a new volume).
If that partition is formatted with ext3, use resize2fs (if it is a reiserfs partition, use resize_reiserfs), then lvreduce to free space in VolGroup00; after that you can create new volumes/partitions, etc.

Posted by: Todvard at December 12, 2006 11:41 AM

In reply to:

"just a note to anyone hoping to start using lvm snapshots on an already running system - even if the machine is already using lvm, you're probably out of luck."

Well it sure is troublesome. But I did it, and I didn't have LVM at all. The hard part was just being able to unmount /var, which really required a reboot.

But if you already have LVM and you get
"Insufficient free extents (1) in volume group VolGroup00. 4 required." I guess you just could do the same as I did. lvresize... without resizing the ext3 partition below of course, since then you're back to square 1.


Posted by: Marcus Herou at December 22, 2006 1:22 AM

How about temporarily adding a ramdisk to the VG to put the snapshot on? Anybody tried that? I'm thinking it might be big enough to last while a second rsync backup pass is run. ie

rsync the real disk
make the snapshot
rsync the snapshot
delete the snapshot

it would need to be bigger than the default ramdisk size (16MB on our machines). Would we get that memory back afterwards?
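Sketching what I mean (completely untested; the device name, sizes, and whether the memory ever comes back depend on the kernel's ramdisk driver):

# assumes the kernel was booted with ramdisk_size= larger than the 16MB default
[root@db ~]# pvcreate /dev/ram0
[root@db ~]# vgextend VolGroup00 /dev/ram0
# put the snapshot's change area on the ramdisk PV
[root@db ~]# lvcreate --snapshot --size 200m --name snap /dev/VolGroup00/LogVol00 /dev/ram0
# ... mount and rsync the snapshot here ...
[root@db ~]# lvremove -f /dev/VolGroup00/snap
[root@db ~]# vgreduce VolGroup00 /dev/ram0
[root@db ~]# pvremove /dev/ram0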

Al

Posted by: Alastair Young at December 22, 2006 6:57 PM
