EMC PowerPath device & ZFS query (Solaris)
Post 302868121 by Peasant on Saturday 26th of October 2013, 02:00:04 AM
Just use ZFS mirroring with zpool attach/detach.

Attach a LUN of the same size or larger from the EVA storage to the tibcoapp pool as a mirror of the existing EMC device. After the resilver is complete, detach the EMC disk (see the sketch below).

No need for SAN storage replication techniques.
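
A minimal sketch of the sequence, assuming the pool currently sits on the EMC pseudo device emcpower0a and the new EVA LUN shows up as c4t1d0 (both device names are placeholders; take the real ones from format or powermt display):

# attach the EVA LUN as the second side of a mirror with the EMC device
zpool attach tibcoapp emcpower0a c4t1d0

# watch the resilver; wait until zpool status reports it has finished
zpool status tibcoapp

# once resilvered, drop the EMC side of the mirror
zpool detach tibcoapp emcpower0a

The pool stays online and mounted the whole time; after the detach the data lives only on the EVA LUN.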

Hope that helps
Regards
Peasant.
 

10 More Discussions You Might Find Interesting

1. Filesystems, Disks and Memory

Can Veritas DMP & EMC PowerPath coexist?

We currently have a Solaris box connected to a CLARiiON storage system that is utilising DMP for path failover. I would prefer to use EMC's PowerPath and was wondering if the two can coexist? Basically, I am struggling to find any documentation on the subject and was wondering if anyone can give me... (2 Replies)
Discussion started by: aaron2k
2 Replies

2. Linux

Query about creating sysfs directory under device driver

Hi all, currently I am involved in developing a device driver for custom hardware. My Linux stack already has the sysfs directory structure /sys/class/hwmon/. My need is that, while loading my device driver, I need to create an "xyz" sysfs directory inside the hwmon sysfs directory as... (0 Replies)
Discussion started by: cbalu
0 Replies

3. Solaris

Query related to device naming of SATA

Friends, could you please clarify how a Solaris 10 machine recognizes a SATA hard disk and SATA CD/DVD drives. Will it recognize them like SCSI, e.g. c0t0d0, or like IDE, e.g. c0d0? Thank you. (11 Replies)
Discussion started by: saagar
11 Replies

4. Red Hat

Configure EMC Powerpath?

Hi, I have a Red Hat 5.3 server which has 2 VGs: one is rootvg on the local hard disk and the other is applicationvg on the SAN. When I reboot the server, the EMC PowerPath driver is not starting up automatically, hence applicationvg is not mounting properly. Therefore I need to unmount it manually and... (4 Replies)
Discussion started by: Makri
4 Replies

5. Emergency UNIX and Linux Support

Mapping between "Pseudo name" and "Logical device ID" in powerpath with SVM changed....

Dear All, I was having PowerPath 5.2 on a Sun server with SVM connected to a CLARiiON box. Please find the following output: root # powermt display dev=all Pseudo name=emcpower3a CLARiiON ID=CK200073400372 Logical device ID=60060160685D1E004DD97FB647BFDC11 state=alive; policy=CLAROpt;... (1 Reply)
Discussion started by: Reboot
1 Replies

6. Solaris

ZFS snapshot query

I saved one of my ZFS snapshots on the remote machine with the following command, and now I want to restore the same snapshot to the original server. How can I receive it on the original server from the backup server? #zfs send rpool/ROOT/sol10_patched@preConfig | ssh x.x.x.x zfs receive... (1 Reply)
Discussion started by: fugitive
1 Replies

7. AIX

Help with EMC BCV device

I'm trying to auto-mount an EMC Symmetrix BCV device at boot but am having a problem making the BCV available. I put a script called mkbcv in the inittab, and an engineer suggested adding a 120 sec sleep between cfgmgr runs, so I did that also. My mkbcv script seems to be working fine; it says "hdisk4 Available" ... (1 Reply)
Discussion started by: shuhei365
1 Replies

8. Linux

EMC, PowerPath and issue on using LUN

Hello guys, I'm going crazy over here with a problem with a LUN created on an EMC CX3. I successfully managed to create the LUN on the storage (the LUN is named DBLNX25EC_TST) after doing the following process: echo "1" > /sys/class/fc_host/host<n>/issue_lip and echo "- - -" >... (10 Replies)
Discussion started by: Zarnick
10 Replies

9. Solaris

One emc powerpath failed

It seems like I lost one path on my Solaris 11 box, but I want to make sure before going to the storage team whether the issue is on the OS side or the storage side. The storage team is able to see that only one WWN is logged in on their switch. I am not at the server's physical location. What does the output below say? I... (0 Replies)
Discussion started by: solaris_1977
0 Replies

10. Solaris

Setting up Solaris & ZFS for the first time

Hello all, I’ve made the decision to switch my storage server from FreeNAS to Solaris. I opted to use FreeNAS as it has ZFS and, until BTRFS is stable, it’s the best option (IMHO) for backup and network storage. The switch was prompted by the USB stick that FreeNAS was on getting lost during a... (1 Reply)
Discussion started by: BlueDalek
1 Replies
GPTZFSBOOT(8)						    BSD System Manager's Manual 					     GPTZFSBOOT(8)

NAME
gptzfsboot -- GPT bootcode for ZFS on BIOS-based computers

DESCRIPTION
gptzfsboot is used on BIOS-based computers to boot from a filesystem in a ZFS pool. gptzfsboot is installed in a freebsd-boot partition of a GPT-partitioned disk with gpart(8).

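As a hedged illustration of that layout (the disk name ada0 and the 512K size are assumptions, not part of this manual page), the freebsd-boot partition is typically created with gpart before the bootcode is written:

# create a small freebsd-boot partition as index 1 on disk ada0
gpart add -t freebsd-boot -s 512K -i 1 ada0
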
IMPLEMENTATION NOTES
The GPT standard allows a variable number of partitions, but gptzfsboot only boots from tables with 128 partitions or less.

BOOTING
gptzfsboot tries to find all ZFS pools that are composed of BIOS-visible hard disks or partitions on them. gptzfsboot looks for ZFS device labels on all visible disks and in discovered supported partitions for all supported partition scheme types. The search starts with the disk from which gptzfsboot itself was loaded. Other disks are probed in BIOS defined order. After a disk is probed and gptzfsboot determines that the whole disk is not a ZFS pool member, the individual partitions are probed in their partition table order. Currently GPT and MBR partition schemes are supported. With the GPT scheme, only partitions of type freebsd-zfs are probed.

The first pool seen during probing is used as a default boot pool. The filesystem specified by the bootfs property of the pool is used as a default boot filesystem. If the bootfs property is not set, then the root filesystem of the pool is used as the default.

zfsloader(8) is loaded from the boot filesystem. If /boot.config or /boot/config is present in the boot filesystem, boot options are read from it in the same way as boot(8).

The ZFS GUIDs of the first successfully probed device and the first detected pool are made available to zfsloader(8) in the vfs.zfs.boot.primary_vdev and vfs.zfs.boot.primary_pool variables.

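A short sketch of steering the default boot filesystem through the bootfs property described above (the pool and dataset names tank/ROOT/default are hypothetical):

# make tank/ROOT/default the default boot filesystem of pool tank
zpool set bootfs=tank/ROOT/default tank

# confirm the setting
zpool get bootfs tank
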
USAGE
Normally gptzfsboot will boot in fully automatic mode. However, like boot(8), it is possible to interrupt the automatic boot process and interact with gptzfsboot through a prompt. gptzfsboot accepts all the options that boot(8) supports.

The filesystem specification and the path to zfsloader(8) are different from boot(8). The format is

[zfs:pool/filesystem:][/path/to/loader]

Both the filesystem and the path can be specified. If only a path is specified, then the default filesystem is used. If only a pool and filesystem are specified, then /boot/zfsloader is used as a path.

Additionally, the status command can be used to query information about discovered pools. The output format is similar to that of zpool status (see zpool(8)).

The configured or automatically determined ZFS boot filesystem is stored in the zfsloader(8) loaddev variable, and also set as the initial value of the currdev variable.

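For instance, to load the loader from a specific filesystem one could type the following at the interrupted boot prompt (tank/ROOT/default is a placeholder dataset name):

zfs:tank/ROOT/default:/boot/zfsloader

Typing only a path would use the default filesystem instead, and the status command lists the pools discovered during probing.
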
FILES
/boot/gptzfsboot    boot code binary
/boot.config        parameters for the boot block (optional)
/boot/config        alternative parameters for the boot block (optional)

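A minimal example of the optional /boot.config, assuming one wants a dual serial/internal console at 115200 baud (the options accepted are those documented in boot.config(5)):

-D -S115200
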
EXAMPLES
gptzfsboot is typically installed in combination with a "protective MBR" (see gpart(8)). To install gptzfsboot on the ada0 drive:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

gptzfsboot can also be installed without the PMBR:

gpart bootcode -p /boot/gptzfsboot -i 1 ada0

SEE ALSO
boot.config(5), boot(8), gpart(8), loader(8), zfsloader(8), zpool(8)

HISTORY
gptzfsboot appeared in FreeBSD 7.3.

AUTHORS
This manual page was written by Andriy Gapon <avg@FreeBSD.org>.

BUGS
gptzfsboot looks for ZFS meta-data only in MBR partitions (known on FreeBSD as slices). It does not look into BSD disklabel(8) partitions that are traditionally called partitions. If a disklabel partition happens to be placed so that ZFS meta-data can be found at the fixed offsets relative to a slice, then gptzfsboot will recognize the partition as a part of a ZFS pool, but this is not guaranteed to happen.

BSD                              September 15, 2014                              BSD