I've just built a Solaris 11.3 server with a number of LDOMs; it's all well and good, with no serious issues so far.
On our older Solaris 10 systems we ran Oracle 11g using ASM disks. These were raw metadevices that came in very nicely as "/dev/md/rdsk/dnn"; it all worked very well and performed pretty well to boot.
I'm working with a fairly small database server that had 10 ASM disks, totalling almost 1 TB of space - so here is my question set.
Is there an equivalent way of doing this using a zpool?
Would I be better off going ZFS native for these disks, and how much of a performance hit would I take?
Or should I just use the /dev/rdsk devices and absorb the admin load when it comes to database growth?
Just for some additional information, the Solaris server is connected to a VNX8300 via IBM SVC.
I haven't done anything in Solaris since they switched from steam-powered machines to electrical ones, but I have suffered with ASM, and with DBAs requesting it, a lot. Maybe this will be helpful to you:
There is virtually no performance hit unless the system is extremely low on memory and/or you do some real-time processing of high loads. Under "normal" conditions there is enough memory left over for the file cache that the difference between a filesystem and a raw device is negligible. Everything else is just myths told from one DBA to the next. Anyway, give them whatever they want; it isn't worth the time arguing with DBAs about technicalities - if they understood them, they would be SysAdmins like us, wouldn't they?
With ASM you end up with "anonymous" disks, and nobody is really helped by that. About 10 years ago I had a system where several DB instances were running in parallel, all with ASM. The system had ~250 hdisk devices. Guess what? The DBAs eventually managed to give us the wrong hdisk number to delete (something like "hdisk145" instead of "hdisk154"), and one DB came to a screeching halt. Since I had no way to find out what a given disk was used for, or whether it was used at all (to the OS they all look unused), this was bound to happen sooner or later.
The compromise I set up was to create volume groups from the disks, create logical volumes in them, and give only these to the DB as raw devices. Now, this was AIX and the Solaris wording is different, but as far as I know a zpool is a sort of LVM itself, so you will surely know how to translate this into proper Solaris procedures. Never give them disk devices as raw devices! Give them volumes without a filesystem on them, and always keep a control layer (the LVM) in place to take care of things.
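The Solaris translation of that compromise would be ZFS volumes (zvols) handed out as raw devices. A minimal sketch, with made-up pool, volume, and device names:

```shell
# Create a pool from the SAN LUNs (device names are placeholders)
zpool create oraasm c0t5000C500A1B2C3D4d0 c0t5000C500A1B2C3E5d0

# Carve out a 100 GB ZFS volume; its raw device appears under /dev/zvol/rdsk/
zfs create -V 100g oraasm/asmdisk01

# Hand only this raw zvol device to the database, never the whole disk
ls -lL /dev/zvol/rdsk/oraasm/asmdisk01
```

That keeps the LVM-style control layer in the sysadmin's hands: the pool shows which disks are in use, and the zvol names show what they are used for.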
1. You can accomplish this with a zpool using ZVOLs.
I would not recommend or advise it, unless used in a lab.
The performance hit would be noticeable.
2. An Oracle database on a ZFS filesystem (no ASM) will require a lot of tuning to even approach ASM performance.
Expect a performance drop on multi-TB databases, especially once fragmentation sets in, regardless of the above.
It can get close to ASM, but only with a lot more effort and a separate zpool per Oracle database component (redo, archive, data).
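For reference, the usual starting points for that tuning, sketched with hypothetical pool, dataset, and LUN names (the per-component separation is the point; the names are not):

```shell
# Separate pools per component, built from different LUNs (placeholders here)
zpool create oradata c0t5000C500AAAA0001d0   # datafiles
zpool create oralog  c0t5000C500BBBB0001d0   # redo logs

# Datafiles: match recordsize to db_block_size (typically 8 KB)
zfs create -o recordsize=8k -o logbias=throughput oradata/db01

# Redo: latency-sensitive; keep logbias at its default (latency)
zfs create -o recordsize=128k -o logbias=latency oralog/redo01
```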
3. Use symlinks to link /dev/rdsk/cXtXdXs6 (or whichever slice you selected) to a human-readable device name for use in ASM.
Choosing the right naming policy will be of great help here.
Also, use the same slice (say s6) for all ASM disks; don't mix slices, since that makes things harder to administer in case of problems (or, as management says these days, "challenges").
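A minimal sketch of such a symlink scheme; the directory, the <db>_<purpose><nn> naming convention, and the LUN names are all invented for illustration (in production you would pick a stable root such as an /oracle path):

```shell
# Collect human-readable names for the ASM candidate disks in one directory
ASMDIR=/var/tmp/asmdisks        # illustration only; use a permanent path in production
mkdir -p "$ASMDIR"

# Every ASM disk uses slice 6, per the advice above; LUN names are made up
ln -sf /dev/rdsk/c0t5000C500A1B2C3D4d0s6 "$ASMDIR/hrprd_data01"
ln -sf /dev/rdsk/c0t5000C500A1B2C3E5d0s6 "$ASMDIR/hrprd_fra01"

# ASM_DISKSTRING can then point at $ASMDIR/* instead of raw cXtXdX names
ls -l "$ASMDIR"
```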
I see no need to use metadevices, since ASM is effectively Oracle's own database volume manager, supporting mirroring, striping and so on by default.
There are some corner cases in which SVM can be useful.
For example, if you require a volume that is RAID 5 or RAID 6 protected and used by ASM, this can only be done using metadevices (AFAIK, ASM only supports mirrors - two-way and three-way).
But I have never met this need in practice, since I mostly use storage-side LUNs which are added to ASM as-is (partitioned and symlinked, without ASM protection).
Thanks very much for the feedback on this thread. The final decision was to go with the ASM raw devices - however, there were a couple of things I discovered along the way that might be worth sharing.
The first point worth noting is that when you create the ASM disks in format, make sure that cylinder zero is not included in the slice, as ASM can overwrite the label - which the OS protests about. So the ASM slices were formatted to start at cylinder 1.
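To illustrate (the device name and geometry below are invented; the only detail that matters is that the ASM slice starts beyond sector 0):

```shell
# Verify the slice layout on an ASM candidate disk (device name is a placeholder)
prtvtoc /dev/rdsk/c0t5000C500A1B2C3D4d0s2

# The ASM slice (s6 here) must start past sector 0 so ASM cannot clobber
# the VTOC label; illustrative output, actual numbers depend on LUN geometry:
# *                          First     Sector    Last
# * Partition  Tag  Flags    Sector     Count    Sector
#        6      4    00      16065   209633715  209649779
```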
A second point worth noting is that the disk should be given a comment (volume name) in format, to make sure nobody grabs it and re-formats it.
Thirdly, it's worth giving the disks meaningful names when setting up the virtual disk services.
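Something along these lines, with the disk and domain names invented for illustration:

```shell
# Hypothetical names: the vds device name matches the vdisk name the guest sees
ldm add-vdsdev /dev/rdsk/c0t5000C500A1B2C3D4d0s6 hrprd_data01@fbasphnhhp01-vds
ldm add-vdisk hrprd_data01 hrprd_data01@fbasphnhhp01-vds fbasphnhhp01
```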
Other things I noticed were that Oracle VM Server for SPARC 3.5 is a long way from LDoms for Solaris 1.0.1.
I eventually managed to get the local repo set up - not without incident - and have the local LDOMs pointing at it.
I also noticed that it makes no difference whether you use /dev/dsk/ or /dev/rdsk/ paths in the vds; the result is the same.
Starting the slice at cylinder 0 will cause havoc: at some point ASM will overwrite the label, making the disk unusable to the operating system until it is relabelled, and possibly losing database data.
Don't fall into the primary-vds trap.
Having one disk service for a lot of disks can cause problems.
I've had issues with such configurations, with systems complaining they were out of LDC channels and the like.
Having separate services such as fbasphnhhp01-vds and fdbsphnhhp01-vds adds readability and stability.
Naming things properly is really important in Oracle VM; when things go wild, it's much easier to track them down.
A couple more hints:
Keep the names of the vds devices the same as the vdisks added to the LDOM.
Enumerate your vdisks with ldm add-vdisk id=N ...
I tend to leave the first IDs (0-10) for system stuff (migration of rpools, swap/flash devices, etc.) and use the higher numbers for ASM devices.
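A sketch of that numbering convention (all disk, service, and domain names invented):

```shell
# Low IDs for system disks, higher IDs reserved for ASM devices
ldm add-vdisk id=0  rootdisk01 rootdisk01@ldom1-vds ldom1
ldm add-vdisk id=1  swapdisk01 swapdisk01@ldom1-vds ldom1
ldm add-vdisk id=20 asm_data01 asm_data01@ldom1-vds ldom1
ldm add-vdisk id=21 asm_data02 asm_data02@ldom1-vds ldom1
```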
Limit kernel memory consumption (effectively the ARC cache) inside the LDOM if you are running an Oracle database on ASM.
For example, a 100 GB system with a 10 GB rpool and Oracle home will work nicely with that reservation set to 90 (%), with the SGA set to 80 GB, leaving the rest for other operations if required.
Be sure to calculate your requirements; it's a tunable that can be changed at runtime.
This is, of course, a choice, but one that helps in the long run (fire and forget).
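Assuming the runtime tunable meant here is user_reserve_hint_pct - the usual Solaris 11 knob for reserving memory for applications and thereby capping the ZFS ARC; that mapping is my assumption - setting the 90% from the example might look like:

```shell
# Assumption: the tunable is user_reserve_hint_pct; 0t90 is mdb notation for
# decimal 90, i.e. reserve 90% of memory for applications (the SGA), leaving
# the kernel/ARC the remainder. Takes effect at runtime.
echo "user_reserve_hint_pct/W 0t90" | mdb -kw

# Check the current value
echo "user_reserve_hint_pct/D" | mdb -k
```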
Hope that helps, and that I didn't bore you to death.
Regards
Peasant