Solaris: Restore of NetApp FC LUN targets used as the disks for a zpool with exported ZFS file systems
Post 302947612 by os2mac, Friday 19 June 2015, 01:33:58 PM
Why we are using storage snaps instead of ZFS snaps

The reasons are numerous and lengthy, but the TL;DR is infrastructure and DR.

With reference to your reboot comment: this is a zpool whose datasets are delegated to a non-global zone. You can't export the zpool while a dataset is in use, i.e. the zone the dataset is assigned to has to be offline, hence the requirement for the zone reboot. Roughly, the sequence looks like the sketch below.
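To make that concrete, here is a rough outline of the steps involved; the zone and pool names (appzone, apppool) are illustrative, not our actual configuration, and the actual LUN restore on the NetApp side is elided:

    # halt the zone that has the dataset delegated to it
    zoneadm -z appzone halt

    # with the dataset no longer in use, the pool can be exported
    zpool export apppool

    # ... restore / re-present the NetApp FC LUNs here ...

    # re-import the pool and bring the zone back up
    zpool import apppool
    zoneadm -z appzone boot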

I'm looking into replicating the functionality of time-slider with scripts and cron jobs, along the lines of the sketch below.
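As a minimal sketch of that approach (pool name, schedule, and retention count are illustrative, not the policy we actually run):

    #!/bin/sh
    # Rough time-slider-like rotation: recursive snapshot plus simple retention.
    POOL=apppool          # illustrative pool name
    KEEP=24               # number of auto snapshots to retain
    STAMP=`date +%Y%m%d-%H%M`

    # snapshot the pool and every dataset under it in one step
    zfs snapshot -r ${POOL}@auto-${STAMP}

    # prune the oldest auto- snapshots beyond the retention count;
    # 'zfs destroy -r' removes the same-named snapshot on all child datasets
    COUNT=`zfs list -H -d 1 -t snapshot -o name ${POOL} | grep -c "@auto-"`
    while [ "${COUNT}" -gt "${KEEP}" ]; do
        OLDEST=`zfs list -H -d 1 -t snapshot -o name -s creation ${POOL} | grep "@auto-" | head -1`
        zfs destroy -r "${OLDEST}"
        COUNT=`expr ${COUNT} - 1`
    done

Driven from root's crontab, e.g. hourly (script path is hypothetical):

    0 * * * * /usr/local/bin/zfs-auto-snap.sh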
 

TARGETS(5)						      BSD File Formats Manual							TARGETS(5)

NAME
     targets -- configuration file for iSCSI targets

SYNOPSIS
     targets

DESCRIPTION
     The targets file describes the iSCSI storage which is presented to iSCSI initiators by the iscsi-target(8) service. A description of the iSCSI protocol can be found in Internet Small Computer Systems Interface RFC 3720.

     Each line in the file (other than comment lines that begin with a '#') specifies an extent, a device (made up of extents or other devices), or a target to present to the initiator. Each definition (an extent, a device, or a target) is specified on a single whitespace-delimited line.

     The extent definition specifies a piece of storage that will be presented to initiators. It is the basic definition for an iSCSI target; each target must contain at least one extent definition. The first field in the definition is the extent name, which must begin with the word ``extent'' and be followed by a number. The next field is the file or NetBSD device which will be used as persistent storage. The next field is the offset (in bytes) of the start of the extent; this field is usually 0. The fourth field is the size of the extent. The basic unit is bytes, and the shorthand KB, MB, GB, and TB can be used for kilobytes (1024 bytes), megabytes (1024 kilobytes), gigabytes (1024 megabytes), and terabytes (1024 gigabytes) respectively. It is possible to use the word ``size'' to use the full size of the pre-existing regular file given in the extent definition.

     The device definition specifies a LUN or device, and is made up of extents and other devices; hierarchies of devices can be created using the device definition. The first field in the definition is the device name, which must begin with the word ``device'' and be followed by a number. The next field is the type of resilience that is to be provided by the device. For simple devices, RAID0 suffices; greater resilience can be gained by using the RAID1 resilience field. Following the resilience field is a list of extents or other devices. Large devices can be created by using multiple RAID0 extents, in which case each extent will be concatenated. Resilient devices can be created by using multiple RAID1 devices or extents, in which case data will be written to each of the devices or extents in turn. If RAID1 resilience is used, all the extents or sub-devices must be the same size. Please note that RAID1 recovery is not yet supported by the iscsi-target(8) utility. An extent or sub-device may only be used once.

     The target definition specifies an iSCSI target, which is presented to the iSCSI initiator. Multiple targets can be specified. The first field in the definition is the target name, which must begin with either of the words ``target'' or ``lun'' and be followed by a number. Optionally, if a target is followed by an ``='' sign and some text, the text is taken to be the iSCSI Qualified Name (IQN) of the target; this IQN is used by the initiator to connect to the appropriate target. The next field selects whether the storage should be presented as writable or as read-only: ``rw'' denotes read-write storage, whilst ``ro'' denotes read-only storage. The next field is the device or extent name that will be used as persistent storage for this target. The fourth field is a slash-notation netmask which will be used, during the discovery phase, to control the network addresses to which targets will be presented. The magic values ``any'' and ``all'' will expand to be the same as ``0/0''.

     If an attempt is made to discover a target which is not allowed by the netmask, a warning will be issued using syslog(3) to make administrators aware of this attempt. The administrator can still use tcp wrapper functionality, as found in hosts_access(5) and hosts.deny(5), to allow or deny discovery attempts from initiators, as well as using the inbuilt netmask functionality.

FILES
     /etc/iscsi/targets    the list of exported storage targets

SEE ALSO
     syslog(3), hosts.deny(5), hosts_access(5), iscsi-target(8)

HISTORY
     The targets file first appeared in NetBSD 4.0.

BSD                            December 18, 2007                            BSD
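Pulling the field descriptions above together, a minimal targets file could look like the following; the backing file, size, and netmask are illustrative only:

    # extent name   backing file/device    offset  size
    extent0         /tmp/iscsi-target0     0       100MB

    # device name   resilience   extents or sub-devices
    device0         RAID0        extent0

    # target name   rw/ro   device    allowed initiator netmask
    target0         rw      device0   10.4.0.0/16

A mirrored LUN would instead list two equally sized extents after the RAID1 keyword, e.g. ``device0 RAID1 extent0 extent1''.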