Restore of NetApp FC LUN targets used as the disks for a zpool with exported ZFS file systems
Post 302947643 by os2mac, Friday 19 June 2015
Yes, we are currently using NetApp-managed snapshots for backup and recovery.
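For context: NetApp's scheduled snapshots on a 7-Mode filer are configured roughly like this (the volume name is purely illustrative, and your setup may use SnapDrive or something else entirely):
Code:
# keep 0 weekly, 7 nightly and 0 hourly snapshots on the volume backing the LUNs
snap sched lunvol0 0 7 0
# show what is currently retained
snap list lunvol0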

However, after this conversation and some further reading online, specifically this:

Automatic ZFS Snapshot Rotation on FreeBSD | Thinking Sysadmin

I've modified that code into the version below:
Code:
#!/usr/bin/bash
# Rotate ZFS snapshots: keep $COUNT snapshots named $SNAP.0 (newest)
# through $SNAP.<count-1> (oldest) on $TARGET and its descendants.

# Path to ZFS executable:
ZFS=/usr/sbin/zfs

# Parse arguments:
TARGET=$1
SNAP=$2
COUNT=$3

# Function to display usage:
usage() {
    scriptname=`/usr/bin/basename "$0"`
    echo "$scriptname: Take and rotate snapshots on a ZFS file system"
    echo
    echo "  Usage:"
    echo "  $scriptname target snap_name count"
    echo
    echo "  target:    ZFS file system to act on"
    echo "  snap_name: Base name for snapshots, to be followed by a '.' and"
    echo "             an integer indicating relative age of the snapshot"
    echo "  count:     Number of snapshots in the snap_name.number format to"
    echo "             keep at one time.  Newest snapshot ends in '.0'."
    echo
    exit 1
}

# Basic argument checks (expansions quoted so empty arguments don't confuse test):
if [ -z "$COUNT" ] ; then
    usage
fi
if [ ! -z "$4" ] ; then
    usage
fi

# The existence checks below rely on the .zfs snapshot directory, so the
# dataset must have a real mountpoint (not "legacy" or "none"):
mount=`$ZFS get -H -o value mountpoint "$TARGET"`
case $mount in
    /*) ;;
    *)  echo "$TARGET has no usable mountpoint ($mount)" >&2 ; exit 1 ;;
esac

# Snapshots are numbered starting at 0; $max_snap is the highest numbered
# snapshot that will be kept.
max_snap=$(($COUNT - 1))

# Clean up oldest snapshot:
if [ -d "$mount/.zfs/snapshot/$SNAP.$max_snap" ] ; then
    $ZFS destroy -r "$TARGET@$SNAP.$max_snap"
fi

# Rename existing snapshots, shifting each one up a number:
dest=$max_snap
while [ $dest -gt 0 ] ; do
    src=$(($dest - 1))
    if [ -d "$mount/.zfs/snapshot/$SNAP.$src" ] ; then
        $ZFS rename -r "$TARGET@$SNAP.$src" "$TARGET@$SNAP.$dest"
    fi
    dest=$src
done

# Create new snapshot:
$ZFS snapshot -r "$TARGET@$SNAP.0"

and it appears to be working quite nicely.
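In case it helps anyone else, a cron entry like this would drive it daily (the dataset name and script path are just examples):
Code:
# daily rotation at midnight, keeping 7 snapshots (daily.0 .. daily.6)
0 0 * * * /usr/local/sbin/zfs-snap-rotate tank/luns daily 7

And since the thread started with restores: getting back to the newest snapshot is a one-liner, while older copies are better cloned out side-by-side, because rolling back past the newest snapshot requires -r and destroys the snapshots in between:
Code:
# revert the dataset to the most recent snapshot
zfs rollback tank/luns@daily.0
# or pull an older copy out alongside the live data without touching it
zfs clone tank/luns@daily.3 tank/luns_restore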
 
