Restore of Netapp FC lun targets used as the disks for a zpool with exported zfs file systems


 
# 1  
Old 06-18-2015

So,

We have a NetApp storage solution, and SPARC T4-4s running LDoms with client zones inside the LDoms. We are using FC for storage connectivity. Here's the basic setup:

FC LUNs are exported to the primary domain on the SPARC box. Using ldm, they are then exported to the LDom as vdisks. At the LDom level, zpools are created on the vdisks. ZFS filesystems are then created and delegated to the zone as datasets, with the ZFS mountpoint set to a local path inside the zone.
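
For illustration, the chain looks roughly like this (the device path, service, domain, zone, and pool names below are placeholders, not our real ones):

Code:
# On the primary domain: export the FC LUN to the guest as a vdisk
ldm add-vdsdev /dev/dsk/c0t600A0980XXXXXXXXd0s2 lun0@primary-vds0
ldm add-vdisk vdisk0 lun0@primary-vds0 ldom1

# Inside the LDom: build the pool on the vdisk and delegate a dataset
zpool create tank c0d1
zfs create tank/appdata
zonecfg -z myzone "add dataset; set name=tank/appdata; end"

# Inside the zone: point the delegated dataset at a local path
zfs set mountpoint=/app/data tank/appdata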

Here's the rub/question: what is the procedure to restore the FC LUN on the NetApp from a snapshot without having to reboot the zone?

If I do it without rebooting the zone, the zpool in the LDom shows the LUN as degraded/corrupted.

Ideas?
# 2  
Old 06-19-2015
Reverting a storage snapshot underneath a live filesystem is not possible.
Nor is reverting any other kind of snapshot, for that matter.

You will need to export the zpool, restore the storage snapshot, and then import the zpool again.
No reboot should be required, unless it is a root zpool; in that case you will have to power off the LDom, revert, and power it back on.
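
In outline, something like this (the pool name is just an example; the filer-side command depends on your ONTAP setup):

Code:
# in the LDom that owns the pool
zpool export tank

# ... revert the LUN's volume to the snapshot on the NetApp here ...

zpool import tank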

Is there a reason you are not using the built-in ZFS snapshotting? It is much more flexible, and a lot of it can be done on live systems (no need to export the zpool), but you will still need to stop the services which are using those filesystems.
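
For example, with a made-up dataset name:

Code:
# take a snapshot of the dataset
zfs snapshot tank/appdata@before_change

# stop the services using the filesystem, then roll back in place;
# -r destroys any snapshots newer than the one being rolled back to
zfs rollback -r tank/appdata@before_change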


Hope that helps
Regards
Peasant.
# 3  
Old 06-19-2015
Why we are using storage snaps instead of ZFS snaps

The reasons are numerous and lengthy, but the TL;DR is infrastructure and DR.

With reference to your reboot comment: this is a zpool with datasets delegated to a non-global zone. You can't export the zpool while a dataset is in use (i.e. the zone the dataset is assigned to has to be offline), and thus the requirement for the zone reboot. The full sequence for us ends up looking like the sketch below.
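
(Zone and pool names are examples:)

Code:
# take the zone, and with it the delegated dataset, offline
zoneadm -z myzone halt

# now the pool can be exported and the LUN reverted on the filer
zpool export tank
# ... restore the NetApp snapshot here ...
zpool import tank

# bring the zone back up
zoneadm -z myzone boot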

I'm looking into replicating the functionality of time-slider with scripts and cron jobs.
# 4  
Old 06-19-2015
I hope you're not using hardware snapshots to replicate live filesystems now.

You seem to be thinking of using ZFS snapshots. That would be much better, and not hard to implement.

Just remember to disable the SSH escape character if you use something like

Code:
zfs send ... | ssh -e none  ... zfs receive ...
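
A fuller sketch, with made-up pool and host names:

Code:
# replicate a snapshot to a remote pool; -e none disables the ssh
# escape character so bytes in the binary stream can't trigger it
zfs send tank/appdata@backup.0 | ssh -e none backuphost zfs receive -F backup/appdata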

# 5  
Old 06-19-2015
Yes, we are currently using NetApp-managed snapshots for backup and recovery.

However, after this conversation and some further reading online, specifically this:

Automatic ZFS Snapshot Rotation on FreeBSD | Thinking Sysadmin

I've modified that code into the script below:
Code:
#!/usr/bin/bash
# Take and rotate snapshots on a ZFS file system.
# Path to ZFS executable:
ZFS=/usr/sbin/zfs

# Parse arguments:
TARGET=$1
SNAP=$2
COUNT=$3

# Function to display usage:
usage() {
    scriptname=`/usr/bin/basename $0`
    echo "$scriptname: Take and rotate snapshots on a ZFS file system"
    echo
    echo "  Usage:"
    echo "  $scriptname target snap_name count"
    echo
    echo "  target:    ZFS file system to act on"
    echo "  snap_name: Base name for snapshots, to be followed by a '.' and"
    echo "             an integer indicating relative age of the snapshot"
    echo "  count:     Number of snapshots in the snap_name.number format to"
    echo "             keep at one time.  Newest snapshot ends in '.0'."
    echo
    exit 1
}

# Basic argument checks:
if [ -z "$COUNT" ] ; then
    usage
fi
if [ ! -z "$4" ] ; then
    usage
fi

# The mountpoint lets us test for a snapshot's existence via the .zfs
# directory; the script assumes the dataset is mounted.
mount=`$ZFS get -H -o value mountpoint "$TARGET"`

# Snapshots are numbered starting at 0; $max_snap is the highest numbered
# snapshot that will be kept.
max_snap=$(($COUNT - 1))

# Clean up the oldest snapshot:
if [ -d "$mount/.zfs/snapshot/$SNAP.$max_snap" ] ; then
    $ZFS destroy -r "$TARGET@$SNAP.$max_snap"
fi

# Shift the remaining snapshots up by one, oldest first:
dest=$max_snap
while [ $dest -gt 0 ] ; do
    src=$(($dest - 1))
    if [ -d "$mount/.zfs/snapshot/$SNAP.$src" ] ; then
        $ZFS rename -r "$TARGET@$SNAP.$src" "$TARGET@$SNAP.$dest"
    fi
    dest=$(($dest - 1))
done

# Create the new snapshot:
$ZFS snapshot -r "$TARGET@$SNAP.0"

and it appears to be working quite nicely.
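
For reference, I drive it from cron along these lines (the install path and schedule are just examples):

Code:
# hourly snapshots, keep 24; daily snapshots, keep 7
0 * * * * /usr/local/bin/zfs-rotate tank/appdata hourly 24
0 0 * * * /usr/local/bin/zfs-rotate tank/appdata daily 7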