SVM soft partitions in a metaset - how to migrate?
Post 302448307 by unahb1 on Wednesday 25th of August 2010 05:47:00 PM
Thanks a lot, DukeNuke2, for the prompt response. In this case, what's the best way to migrate the data? rsync? tar/cpio? Also, the filesystems are mounted and constantly in use.
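One common way to handle a filesystem that has to stay mounted is a two-pass rsync: a first pass while the filesystem is live to move the bulk of the data, then a short application outage for a final incremental pass so the copy ends up consistent. A rough sketch, assuming a hypothetical old mount point /data and a new soft partition d101 in the target metaset mounted at /data_new (both names are placeholders):

    # pass 1: copy the bulk of the data while /data is still in use
    rsync -aHx /data/ /data_new/

    # cut-over window: stop the applications that write to /data, then
    # run a final incremental pass so both copies are identical
    rsync -aHx --delete /data/ /data_new/

    # swap the mounts (device path is a placeholder for the new soft
    # partition in the metaset); remember to update /etc/vfstab as well
    umount /data_new
    umount /data
    mount /dev/md/newset/dsk/d101 /data

    # restart the applications on the migrated filesystem

tar or cpio would work for the initial copy as well, but they have no cheap equivalent of the incremental second pass, which is what keeps the outage short.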

Last edited by DukeNuke2; 08-26-2010 at 10:06 AM.
 

9 More Discussions You Might Find Interesting

1. Solaris

Sun Cluster-Metaset problem

Hi guys, I have two servers which are in a cluster, prd1 and prd2. I already have a metaset configured on them. My job was to create a 1.8 TB LUN on my storage, which has hardware RAID 5 pre-configured. I created the 1.8 TB LUN with RAID 5 on my storage and it got detected on both the servers prd1... (4 Replies)
Discussion started by: madanmeer

2. Solaris

Facing Problem with metaset in SVM

Hi all, I am using Solaris 5.10 on a Sun Blade 150 and I am trying to configure a diskset in Solaris Volume Manager. When I run the following command, it reports an RPC-related error: bash-3.00# metaset -s kingston -a -h u15_9 metaset: u15_9: metad client create: RPC: Program not registered how to... (4 Replies)
Discussion started by: kingston

3. Solaris

Replacing a hard disk (SVM) with a soft partition?

The following is the summary: 1) Four disks in the server, i.e. c1t0d0, c1t1d0, c1t2d0, c1t3d0. c1t2d0 is the disk to be replaced. c1t0d0 and c1t2d0 are mirrors. c1t1d0 and c1t3d0 are mirrors. The metadb to be deleted is in c1t2d0s7. a) Mirror d35 has 2 submirrors, d38 and d39; d38 is a stripe... (0 Replies)
Discussion started by: aji1729

4. Solaris

SVM metaset on 2 node Solaris cluster storage replicated to non-clustered Solaris node

Hi, is it possible to have a Solaris cluster of 2 nodes at SITE-A using SVM and creating a metaset using, say, 2 LUNs (on SAN), then replicating these 2 LUNs to remote site SITE-B via storage-based replication, and then using these LUNs by importing them as a metaset on a server at SITE-B which is... (0 Replies)
Discussion started by: dn2011

5. Solaris

Replacement of metadevice with soft partitions

Hi guys, currently I'm working on one problem. I have a Sol10 box where I need to replace the old storage with a new one with minimal downtime. The layout of the metadevices looks like this: d103 p 1.0GB d100 d102 p 2.0GB d100 d101 p 1.0GB d100 d100 s ... (3 Replies)
Discussion started by: brusell

6. Solaris

Replace LUN with soft partitions

Hi all, can somebody advise me whether it is possible to replace a physical LUN carrying soft partitions (located directly on the LUN, not on a metadevice) with a new one? The current state looks like this: root@xxx:/etc/zones # metastat -c | egrep -i 900bec2d286 d504 p 159GB... (1 Reply)
Discussion started by: brusell

7. Solaris

Metaset repartitioning

Dear all, this metaset stuff drives me crazy. The story begins with the Solaris 8 upgrade.... We have a pair of Solaris 8 machines with Sun Cluster 3.1; to prevent a long downtime, Live Upgrade was chosen. As the metadb cannot use LU to upgrade directly, we removed the diskset before the upgrade, and put it... (0 Replies)
Discussion started by: donaldfoo

8. UNIX for Advanced & Expert Users

live upgrade with raid0 soft partitions

Hi, I have this mirrored system with soft partitions, and I am having difficulty determining the lucreate command in this environment. #metastat -p d0 -m d10 d20 1 d10 1 1 c1t2d0s0 d20 1 1 c1t3d0s0 d1 -m d11 d21 1 d11 1 1 c1t2d0s5 d21 1 1 c1t3d0s5 d100 -p d1 -o 58720384 -b 8388608 d200 -p d1 -o... (1 Reply)
Discussion started by: chaandana
1 Replies

9. Solaris

SVM RAID5: Can an app access raw partitions?

I am using Solaris 9 (SPARC-based) with Sybase and a proprietary DB application that works with Sybase. In the past we have not used SVM or any RAID config. The DBs were configured such that each DB had its own partition. Now I would like to set up a new machine with the DBs on a RAID 5 config... (1 Reply)
Discussion started by: DavidC_SysEngr
vzmigrate(8)							    Containers							      vzmigrate(8)

NAME
       vzmigrate - migrate a container between two OpenVZ servers

SYNOPSIS
       vzmigrate [-r|--remove-area yes|no] [--ssh=ssh_options] [--rsync=rsync_options] [--keep-dst] [--online] [-v] destination_address CTID

DESCRIPTION
       This utility is used to migrate a container from one (source) Hardware Node (HN) to another (destination) HN. The utility can migrate either a
       stopped or a running container. For a stopped container, a simple CT private area transfer is performed (rsync(1) is used for the file transfer).
       For a running container, migration may be offline (default) or online. This program uses ssh as a transport layer; you will need to put an ssh
       public key on the destination node and be able to connect to that node without entering a password.

OPTIONS
       -r, --remove-area yes | no
              Whether to remove the container area on the source HN for a successfully migrated container. Default is yes.

       --ssh=options
              Additional options that will be passed to ssh while establishing the connection to the destination HN.

       --rsync=options
              Additional options that will be passed to rsync(8). You may add options like -z to enable data compression if you are migrating over a
              slow link.

       --keep-dst
              Do not clean the synced destination container private area in case of an error. It makes sense to use this option when migrating a big
              container, to avoid syncing the container private area again if some error (on container stop, for example) occurs during the first
              migration attempt.

       --online
              Perform online (zero-downtime) migration: during the migration the container hangs for a while, and after the migration it continues
              working as though nothing has happened.

       -v     Verbose mode. Causes vzmigrate to print debugging messages about its progress. Multiple -v options increase the verbosity. The maximum
              is 3.

EXAMPLES
       Migration of CT 101 to 192.168.1.130 with downtime:
              vzmigrate 192.168.1.130 101

       Online migration of CT 102 to 192.168.1.130:
              vzmigrate --online 192.168.1.130 102

EXIT STATUS
       0   EXIT_OK           Command completed successfully.
       1   EXIT_USAGE        Bad command line options.
       2   EXIT_VE_STOPPED   Container is stopped.
       4   EXIT_CONNECT      Can't connect to the destination (source) HN.
       6   EXIT_COPY         Container private area copying/moving failed.
       7   EXIT_VE_START     Can't start or restore the destination CT.
       8   EXIT_VE_STOP      Can't stop or checkpoint the source CT.
       9   EXIT_EXISTS       Container already exists on the destination HN.
       10  EXIT_NOTEXIST     Container does not exist on the source HN.
       12  EXIT_IP_INUSE     You attempt to migrate a CT whose IP address(es) are already in use on the destination node.
       13  EXIT_QUOTA        Operation with the CT quota failed.

SEE ALSO
       rsync(1).

COPYRIGHT
       Copyright (C) 2001-2010, Parallels, Inc. Licensed under GNU GPL.

OpenVZ                                                            28 Jun 2011                                                            vzmigrate(8)
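As a worked example of the man page above, the sketch below first sets up the passwordless ssh that the DESCRIPTION section requires, then runs an online migration and checks the documented exit codes. The destination address 192.168.1.130 and CTID 101 are just the values reused from the EXAMPLES section, not anything specific to this thread:

    #!/bin/sh
    # one-time setup: push an ssh public key to the destination node so
    # vzmigrate can connect without a password
    ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"   # skip if a key already exists
    ssh-copy-id root@192.168.1.130

    # online (zero-downtime) migration of CT 101, compressing the rsync
    # stream in case the link between the nodes is slow
    vzmigrate --online --rsync="-z" -v 192.168.1.130 101

    # act on the exit codes documented under EXIT STATUS
    case $? in
        0)  echo "migration completed successfully" ;;
        9)  echo "CT 101 already exists on the destination HN" ;;
        12) echo "CT IP address(es) already in use on the destination" ;;
        *)  echo "migration failed, see vzmigrate(8) EXIT STATUS" ;;
    esac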