01-30-2008
How to clone/migrate a volume in the SAN
Dear Sirs,
I have a Linux server (linux01) booting from SAN, with a volume on a Nexsan SATABeast storage array (san01). The disk/volume has four ext3 partitions; the total size is close to 400 GB, but only 20-30 GB are in use.
I need to move this disk/volume to another Nexsan SATABeast storage array (san02). The SATABeast GUI is very simple, and it has no option to migrate, clone, or move volumes, so I need to do this with other tools. ;-(
What is the recommended method for this? I have done these operations before with UNIX tools like rsync, dd, cpio, tar, etc.
I'm planning to use a second Linux server, linux02, give it access to the old volume on san01 and the new volume on san02, and clone the disk on that server. After cloning the drive, I would reconfigure boot from SAN so the server boots from the new storage array, san02.
I'm also looking for a simpler tool for this type of operation, perhaps a front-end such as G4L, Partimage, or Mondo/Mindi.
dd could be a good and simple method, but what happens when the source and destination disks have different sizes?
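To see what dd does when the destination is bigger, here is a quick sketch using plain files as stand-ins for the SAN devices (all paths are temporary files made up for the test, not the real disks):

```shell
# Simulate cloning a 4 MiB "disk" onto an 8 MiB "disk" with dd.
workdir=$(mktemp -d)

# Create the stand-in source and destination devices.
dd if=/dev/zero of="$workdir/src.img" bs=1M count=4 2>/dev/null
dd if=/dev/zero of="$workdir/dst.img" bs=1M count=8 2>/dev/null

# Put a recognizable marker at the start of the source.
printf 'hello-san' | dd of="$workdir/src.img" conv=notrunc 2>/dev/null

# dd copies byte for byte; the larger destination simply keeps its
# extra space unused past the end of the copied image.
dd if="$workdir/src.img" of="$workdir/dst.img" bs=1M conv=notrunc 2>/dev/null

head -c 9 "$workdir/dst.img"; echo    # prints: hello-san
stat -c %s "$workdir/dst.img"         # prints: 8388608 (still 8 MiB)
```

So copying a small disk onto a larger one is safe; the extra space can be reclaimed later by growing the last partition and its filesystem. The dangerous direction is large onto small, where the filesystems would have to be shrunk first, which is an argument for cloning partition by partition instead of the whole device.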
Thanks for all the comments and advice!
Regards,
vgmove(1M)
NAME
vgmove - move data from an old set of disks in a volume group to a new set of disks
SYNOPSIS
vgmove [-A autobackup] [-p] -m diskmapfile vg_name
vgmove [-A autobackup] -f diskfile -m diskmapfile vg_name
DESCRIPTION
The vgmove command migrates data from the existing set of disks in a volume group to a new set of disks. After the command completes successfully,
the new set of disks will belong to the same volume group. The command is intended to migrate data on a volume group from old storage to
new storage. The diskmapfile specifies the list of source disks to move data from, and the list of destination disks to move data to. The
user may choose to list only a subset of the existing physical volumes in the volume group that need to be migrated to a new set of disks.
The format of the diskmapfile file is shown below:
source_pv_1 destination_pv_1_1 destination_pv_1_2 ....
source_pv_2 destination_pv_2_1 destination_pv_2_2 ....
....
source_pv_n destination_pv_n_1 destination_pv_n_2 ....
If a destination disk is not already part of the volume group, it will be added using vgextend(1M). Upon successful completion of vgmove, the
source disk will be automatically removed from the volume group using vgreduce(1M).
After successful migration, the destination disks are added to the LVM configuration files, namely /etc/lvmtab or /etc/lvmtab_p. The source
disks along with their alternate links are removed from the LVM configuration files.
A sample diskmapfile is shown below:
/dev/disk/disk1 /dev/disk/disk51 /dev/disk/disk52
/dev/disk/disk2 /dev/disk/disk51
/dev/disk/disk3 /dev/disk/disk53
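Because vgmove overwrites an existing diskmapfile when generating one, and otherwise trusts the file's contents, it can be worth checking the file's shape before running the command. A minimal sketch using the sample mapping above (the /tmp file name is an example, not part of this manual):

```shell
# Write the sample diskmapfile and check that every line names one
# source disk followed by at least one destination disk.
cat > /tmp/diskmapfile <<'EOF'
/dev/disk/disk1 /dev/disk/disk51 /dev/disk/disk52
/dev/disk/disk2 /dev/disk/disk51
/dev/disk/disk3 /dev/disk/disk53
EOF
awk 'NF < 2 { printf "line %d needs a source and a destination\n", NR; bad = 1 }
     END { exit bad }' /tmp/diskmapfile && echo "diskmapfile format OK"
```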
The diskmapfile can be manually created, or it can be automatically generated using the -f diskfile and -m diskmapfile options. The argument
diskfile contains a list of destination disks, one per line, such as the sample file below:
/dev/disk/disk51
/dev/disk/disk52
/dev/disk/disk53
When the -f option is given, vgmove reads a list of destination disks from diskfile, generates the source to destination mapping, and saves it
to diskmapfile.
The volume group must be activated before running the vgmove command. If the command is interrupted before it completes, the volume group is in
the same state it was at the beginning of the command. The migration can be continued by running the command with the same options and
disk mapping file.
Options and Arguments
The vgmove command recognizes the following options and arguments:
vg_name The path name of the volume group.
-A autobackup Set automatic backup for this invocation of vgmove.
autobackup can have one of the following values:
y Automatically back up configuration changes made to the volume group.
This is the default.
After this command executes, the vgcfgbackup command is executed for the volume group; see vgcfgbackup(1M).
n Do not back up configuration changes this time.
-m diskmapfile Specify the name of the file containing the
source to destination disk mapping. If the -f option is also given, vgmove will generate the disk mapping and save it to
this filename. (Note that if the diskmapfile already exists, the file will be overwritten.) Otherwise, vgmove will
perform the data migration using this diskmapfile.
-f diskfile Specify the name of the file containing the
list of destination disks. This option is used with the -m option to generate the diskmapfile.
When the -f option is used, no volume group data is moved.
-p Preview the actions to be taken but do not
move any volume group data.
Shared Volume Group Considerations
For volume group versions 1.0 and 2.0, vgmove cannot be used if the volume group is activated in shared mode. For volume groups version 2.1 (or
higher), vgmove can be performed when activated in either shared, exclusive, or standalone mode.
Note that the lvmpud daemon must be running on all the nodes sharing a volume group activated in shared mode. See lvmpud(1M).
When a node wants to share the volume group, the user must first execute a vgscan(1M) if physical volumes were moved in or out of the volume
group at the time the volume group was not activated on that node.
LVM shared mode is currently only available in Serviceguard clusters.
EXTERNAL INFLUENCES
Environment Variables
LANG determines the language in which messages are displayed.
If LANG is not specified or is null, it defaults to "C" (see lang(5)).
If any internationalization variable contains an invalid setting, all internationalization variables default to "C" (see environ(5)).
EXAMPLES
Move data in volume group /dev/vg00 from /dev/disk/disk1 to /dev/disk/disk51. After the migration, remove /dev/disk/disk1 from the volume group:
echo "/dev/disk/disk1 /dev/disk/disk51" > /tmp/diskmapfile
vgmove -m /tmp/diskmapfile /dev/vg00
Generate a source to destination disk map file for /dev/vg00 where the destination disks are /dev/disk/disk51 and /dev/disk/disk52:
echo "/dev/disk/disk51" > /tmp/diskfile
echo "/dev/disk/disk52" >> /tmp/diskfile
vgmove -f /tmp/diskfile -m /tmp/diskmapfile /dev/vg00
SEE ALSO
lvmpud(1M), pvmove(1M), vgcfgbackup(1M), vgcfgrestore(1M), vgextend(1M), vgreduce(1M), intro(7), lvm(7).