Single LUN or multiple smaller LUNs for NFS sharing
Post 302994806 by Peasant, Tuesday 28th of March 2017, 10:49:30 AM (UNIX for Beginners Questions & Answers)
If you are using FC LUNs for that NFS server, I would advise using multiple LUNs rather than one big one.
Just add another three 1.5 TB LUNs, making it a 4 x 1.5 TB zpool.
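For example, something along these lines would grow the pool (the pool name backuppool and the FC device names are made up for illustration; use format or zpool status to find your own):

  # Add three more LUNs, each becomes its own top-level vdev in the stripe
  # zpool add backuppool c2t60060160A0B10001d0 c2t60060160A0B10002d0 c2t60060160A0B10003d0
  # zpool status backuppool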

ZFS will stripe the load (it's basically RAID 0) across all the disks in the zpool.
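You can watch the load actually being spread across the LUNs with zpool iostat, e.g. (same hypothetical pool name):

  # Per-vdev I/O statistics, refreshed every 5 seconds
  # zpool iostat -v backuppool 5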

With one monster LUN you could hit queue depth problems.
A single LUN has a finite queue depth on the storage side, AFAIK regardless of the manufacturer, so every outstanding I/O for the whole pool has to squeeze through that one queue.

On Solaris 11 the default per-LUN queue depth is 256, which seems quite high for a default, but it is appropriate under some workloads.
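If your array vendor recommends a lower per-LUN limit, the usual knob is sd_max_throttle (and ssd_max_throttle for FC-attached ssd devices) in /etc/system. Just a sketch, the value 32 here is a placeholder, use whatever your storage vendor documents:

  * /etc/system: cap outstanding SCSI commands per LUN (example value only)
  set sd:sd_max_throttle = 32
  set ssd:ssd_max_throttle = 32

A reboot is needed for /etc/system changes to take effect.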

This won't be as noticeable on SSDs or FMC modules, but I doubt you are using those for backups over NFS.
So in your situation I would go with multiple LUNs, each the same size as the first one you added.

Hope that helps
Regards
Peasant.