04-19-2018
Exporting physical disk to ldom or ZFS volume
Generally, this is what we do:
- On the primary, export two LUNs (add-vdsdev).
- On the primary, assign these disks to the ldom in question (add-vdisk).
- On the ldom, create a mirrored zpool from these two disks.
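A sketch of what that usual setup looks like as commands. The device paths, the volume names, the service name primary-vds0 and the guest name myldom are all hypothetical placeholders, not taken from the actual configuration:

```shell
# On the primary (service) domain: export each LUN raw through the
# virtual disk server, then attach both to the guest domain.
ldm add-vdsdev /dev/dsk/c0t5000CCA01234AAAAd0s2 data0@primary-vds0
ldm add-vdsdev /dev/dsk/c0t5000CCA01234BBBBd0s2 data1@primary-vds0
ldm add-vdisk vdata0 data0@primary-vds0 myldom
ldm add-vdisk vdata1 data1@primary-vds0 myldom

# Inside the guest: build the mirror with ZFS, so the guest owns
# redundancy, checksumming and resilvering end to end.
zpool create datapool mirror c0d1 c0d2
```

(The c0d1/c0d2 names are just example virtual disk names as they might appear inside the guest.)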
On one server (which is older) we have:
- On the primary, create a mirrored zpool from the two LUNs.
- On the primary, export a ZFS volume from this pool and assign it to the ldom in question (add-vdsdev, add-vdisk).
- On the ldom, create a single-disk zpool from this.
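The older server's variant, sketched the same way (again with hypothetical pool, volume and domain names, and an assumed 100 GB zvol size):

```shell
# On the primary domain: mirror the two LUNs in a pool there, carve
# out a zvol, and export that single zvol to the guest.
zpool create mirpool mirror c0t5000CCA01234AAAAd0 c0t5000CCA01234BBBBd0
zfs create -V 100g mirpool/myldom-vol
ldm add-vdsdev /dev/zvol/dsk/mirpool/myldom-vol ldomvol@primary-vds0
ldm add-vdisk vdata ldomvol@primary-vds0 myldom

# Inside the guest: a single-disk pool on top of the mirrored zvol,
# i.e. ZFS (guest) stacked on ZFS (primary).
zpool create datapool c0d1
```

Note the difference: here every guest write passes through two copy-on-write ZFS layers (the guest's pool and the primary's pool), which is the stacking the question is asking about.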
On this one server we're seeing performance issues. I've heard that doing it this way (i.e. mirroring with ZFS on the primary and exporting the result) is not the best approach and adds overhead.
Is this true? Or does it make no difference?