We're trying out a SAN migration from HP EVA to EMC VMAX and have run into an issue with PowerPath and ZFS.
The method we're currently using to migrate is to export the HP EVA LUNs from our Sun server, replicate them using a SAN-based method, and then present the new LUNs to our Sun server and do a zpool import.
The problem is that when doing the zpool import, ZFS chooses one of the four possible paths to the LUN instead of using the PowerPath pseudo device.
zpool status output:

  pool: tibcoapp
 state: ONLINE
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        tibcoapp                  ONLINE       0     0     0
          c5t50000975F000258Dd20  ONLINE       0     0     0
/etc/powermt display dev=all output:

Pseudo name=emcpower29a
Symmetrix ID=0002987XXXXX
Logical device ID=1306
state=alive; policy=SymmOpt; queued-IOs=0
==============================================================================
--------------- Host ---------------    - Stor -  -- I/O Path --  -- Stats ---
###  HW Path                             I/O Paths                Interf. Mode   State  Q-IOs Errors
==============================================================================
3073 pci@3,700000/SUNW,emlxs@0,1/fp@0,0  c3t50000975F0002589d20s0 FA 3gB  active alive      0      0
3073 pci@3,700000/SUNW,emlxs@0,1/fp@0,0  c3t50000975F0002585d20s0 FA 2gB  active alive      0      0
3077 pci@13,700000/SUNW,emlxs@0,1/fp@0,0 c5t50000975F000258Dd20s0 FA 4gB  active alive      0      0
3077 pci@13,700000/SUNW,emlxs@0,1/fp@0,0 c5t50000975F0002581d20s0 FA 1gB  active alive      0      0
Is there a way to force ZFS to use the pseudo device /dev/dsk/emcpower29a instead of c5t50000975F000258Dd20, which it is currently using?
I know that adding a blank LUN from the EMC SAN to the existing zpool as a mirror would be a lot simpler, but unfortunately this is the method we have to use.
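One approach worth trying, assuming the pool has been exported first and that emcpower29a really is the pseudo device backing this LUN, is to re-import the pool from a directory that contains only the PowerPath pseudo-device nodes, so ZFS has no native c#t#d# path to pick:

```
# Sketch only -- pool and device names are taken from the output above.
zpool export tibcoapp
mkdir -p /tmp/emcpdevs
ln -s /dev/dsk/emcpower29a /tmp/emcpdevs/emcpower29a
zpool import -d /tmp/emcpdevs tibcoapp
zpool status tibcoapp    # the vdev should now show as emcpower29a
```

The -d option restricts where zpool import searches for devices, which is what stops it from discovering the individual c3t.../c5t... paths.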
GPTZFSBOOT(8)            BSD System Manager's Manual            GPTZFSBOOT(8)

NAME
gptzfsboot -- GPT bootcode for ZFS on BIOS-based computers
DESCRIPTION
gptzfsboot is used on BIOS-based computers to boot from a filesystem in a ZFS pool. gptzfsboot is installed in a freebsd-boot partition of a
GPT-partitioned disk with gpart(8).
IMPLEMENTATION NOTES
The GPT standard allows a variable number of partitions, but gptzfsboot only boots from tables with 128 partitions or less.
BOOTING
gptzfsboot tries to find all ZFS pools that are composed of BIOS-visible hard disks or partitions on them. gptzfsboot looks for ZFS device
labels on all visible disks and in discovered supported partitions for all supported partition scheme types. The search starts with the disk
from which gptzfsboot itself was loaded. Other disks are probed in BIOS defined order. After a disk is probed and gptzfsboot determines
that the whole disk is not a ZFS pool member, the individual partitions are probed in their partition table order. Currently GPT and MBR
partition schemes are supported. With the GPT scheme, only partitions of type freebsd-zfs are probed. The first pool seen during probing is
used as a default boot pool.
The filesystem specified by the bootfs property of the pool is used as a default boot filesystem. If the bootfs property is not set, then
the root filesystem of the pool is used as the default. zfsloader(8) is loaded from the boot filesystem. If /boot.config or /boot/config is
present in the boot filesystem, boot options are read from it in the same way as boot(8).
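As an illustration, a /boot.config consisting of the single line

```
-Dh
```

would be read by gptzfsboot just as boot(8) would read it: -h selects the serial console and -D toggles dual-console output.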
The ZFS GUIDs of the first successfully probed device and the first detected pool are made available to zfsloader(8) in the
vfs.zfs.boot.primary_vdev and vfs.zfs.boot.primary_pool variables.
USAGE
Normally gptzfsboot will boot in fully automatic mode. However, like boot(8), it is possible to interrupt the automatic boot process and
interact with gptzfsboot through a prompt. gptzfsboot accepts all the options that boot(8) supports.
The filesystem specification and the path to zfsloader(8) are different from boot(8). The format is
[zfs:pool/filesystem:][/path/to/loader]
Both the filesystem and the path can be specified. If only a path is specified, then the default filesystem is used. If only a pool and
filesystem are specified, then /boot/zfsloader is used as a path.
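As a hypothetical illustration of that format, entering the following at the gptzfsboot prompt would load /boot/zfsloader from the filesystem zroot/ROOT/default (pool and filesystem names invented for the example):

```
zfs:zroot/ROOT/default:/boot/zfsloader
```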
Additionally, the status command can be used to query information about discovered pools. The output format is similar to that of zpool
status (see zpool(8)).
The configured or automatically determined ZFS boot filesystem is stored in the zfsloader(8) loaddev variable, and also set as the initial
value of the currdev variable.
FILES
/boot/gptzfsboot boot code binary
/boot.config parameters for the boot block (optional)
/boot/config alternative parameters for the boot block (optional)
EXAMPLES
gptzfsboot is typically installed in combination with a ``protective MBR'' (see gpart(8)). To install gptzfsboot on the ada0 drive:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gptzfsboot can also be installed without the PMBR:
gpart bootcode -p /boot/gptzfsboot -i 1 ada0
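On a disk that does not yet have a freebsd-boot partition, one could be created with gpart(8) before installing the bootcode (the 512k size here is a common choice, not a requirement):

```
gpart add -t freebsd-boot -s 512k -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
```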
SEE ALSO
boot.config(5), boot(8), gpart(8), loader(8), zfsloader(8), zpool(8)
HISTORY
gptzfsboot appeared in FreeBSD 7.3.
AUTHORS
This manual page was written by Andriy Gapon <avg@FreeBSD.org>.
BUGS
gptzfsboot looks for ZFS meta-data only in MBR partitions (known on FreeBSD as slices). It does not look into BSD disklabel(8) partitions
that are traditionally called partitions. If a disklabel partition happens to be placed so that ZFS meta-data can be found at the fixed offsets relative to a slice, then gptzfsboot will recognize the partition as a part of a ZFS pool, but this is not guaranteed to happen.
BSD September 15, 2014 BSD