Oracle ASM on Solaris 11.3


 
# 1  
Old 04-11-2018
Oracle ASM on Solaris 11.3

Hi Folks,

I've just built a Solaris 11.3 server with a number of LDOMs; all is well so far, with no serious issues.

On our older Solaris 10 systems we ran Oracle 11g using ASM disks. Those disks were raw SVM metadevices presented as "/dev/md/rdsk/dnn"; it all worked very well and performed pretty well to boot.

I'm working with a fairly small database server that had 10 ASM disks, almost 1 TB of space in total - so here is my question set.
  1. Is there an equivalent way of doing this using a zpool?
  2. Would I be better going ZFS native for these disks, and how much of a performance hit would I take?
  3. Or should I just use the /dev/rdsk devices and absorb the admin load when it comes to database growth?

Just for some additional information, the Solaris server is connected to a VNX8300 via IBM SVC.

Regards

Gull04
# 2  
Old 04-11-2018
Quote:
Originally Posted by gull04
  1. Is there an equivalent way of doing this using a zpool?
  2. Would I be better going ZFS native for these disks, and how much of a performance hit would I take?
  3. Or should I just use the /dev/rdsk devices and absorb the admin load when it comes to database growth?
I haven't done anything in Solaris since they switched from steam-powered machines to electrical ones, but I have suffered a lot with ASM and DBAs requesting it. Maybe this will be helpful to you:

There is virtually no such thing as a performance hit unless the system is extremely low on memory and/or you do real-time processing of high loads. Under "normal" conditions there is enough memory left over for the file cache that the difference between a filesystem and a raw device is negligible. Everything else is myth handed from one DBA to the next. Anyway, give them whatever they want; it isn't worth the time to argue with DBAs about technical details - if they understood them, they would be SysAdmins like us, wouldn't they?

With ASM you end up with "anonymous" disks, and nobody is really helped by that. About 10 years ago I had a system where several DB instances were running in parallel, all with ASM. The system had ~250 hdisk devices. Guess what? The DBAs eventually gave us the wrong hdisk number to delete (something like "hdisk145" instead of "hdisk154") and one DB came to a screeching halt. Since I had no way of finding out what a certain disk was used for, or whether it was used at all (to the OS they all look unused), this was bound to happen sooner or later.

The compromise I set up was to create volume groups from the disks, create logical volumes in them, and give only those to the DB as raw devices. That was AIX and the Solaris wording is different, but as far as I know a zpool is a sort of LVM itself, so you can surely translate that into proper Solaris procedures (see the sketch below). Never give them disk devices as raw devices! Give them volumes without a filesystem on them, and always keep a control layer (the LVM) in place to take care of things.
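
If I read the Solaris docs correctly, the closest translation of that compromise would be a ZFS volume (zvol) handed out as a raw device - a minimal sketch, assuming an existing pool; the pool and volume names are illustrative:

Code:
# Create a 100 GB ZFS volume inside an existing pool
zfs create -V 100g oradata/asmvol01

# The matching raw device appears under /dev/zvol/rdsk/
ls -l /dev/zvol/rdsk/oradata/asmvol01

That way the zpool stays your control layer: the OS can always see which volumes exist and how big they are, even though the DB uses them raw.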

I hope this helps.

bakunin
# 3  
Old 04-12-2018
1. You can accomplish this with a zpool using ZVOLs.
I would not recommend or advise it, unless used in a lab.
The performance hit would be noticeable.

2. An Oracle database on a ZFS filesystem (no ASM) will require a lot of tuning to even touch ASM performance.
Expect a performance drop on multi-TB databases, especially once fragmentation sets in, regardless of the above statement.
It can get close to ASM, but only with a lot more effort and zpool separation per Oracle database component (redo, archive, data) - see the tuning sketch after this list.

3. Use symlinks to link /dev/rdsk/cxtxdxs6 (or whichever slice you select) to a human-readable device name to use in ASM - see the sketch below.
Choosing the right naming policy will be of great help here.
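
As a starting point for option 2 - a minimal sketch, assuming Oracle's default 8 KB database block size and that separate pools for data and redo already exist; all names are illustrative:

Code:
# Datafiles: match recordsize to the DB block size, favour throughput
zfs create -o recordsize=8k -o logbias=throughput oradata/data

# Redo logs: leave recordsize at its default, favour low latency
zfs create -o logbias=latency oralog/redo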

Also, use one slice (say s6) for all ASM disks; don't mix slices, since that will make it harder to administer in case of problems - or, as management says these days, challenges.
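
For example - a sketch only; the device path, directory, link name and ownership are illustrative:

Code:
# Give the oracle user access to the raw slice
chown oracle:dba /dev/rdsk/c0t600507680C8082780000000000000492d0s6

# Link it to a human-readable name
mkdir -p /asmdisks
ln -s /dev/rdsk/c0t600507680C8082780000000000000492d0s6 /asmdisks/nhh_asm_01

ASM then discovers the disks through its ASM_DISKSTRING parameter, e.g. '/asmdisks/*'.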

I see no need to use metadevices, since ASM is effectively Oracle's own database volume manager, supporting mirroring, striping etc. by default.

There are some corner cases in which SVM can be useful.
For instance, if you require a volume to be RAID 5 or RAID 6 protected and used by ASM, that can only be done using metadevices (AFAIK, ASM only supports two-way and three-way mirrors).
But I have never met this need in practice, since I mostly use storage-side LUNs which are added to ASM as-is (partitioned and symlinked, without ASM protection).


Hope that helps
Regards
Peasant.
# 4  
Old 04-13-2018
Hi Guys,

Thanks very much for the feedback on this thread. The final decision was to go with the ASM raw devices - however, there were a couple of things I discovered along the way that might be worth sharing.

The first point worth noting is that when you create the ASM disks in format, make sure that cylinder zero is not part of the slice, as ASM can otherwise overwrite the label - which the OS seems to protest about. So the disks were formatted as follows:

Code:
partition> pr
Current partition table (original):
Total disk cylinders available: 2558 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       1 - 2557        4.99GB    (2557/0/0) 10473472
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wm       0 - 2557        5.00GB    (2558/0/0) 10477568
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0

partition>

A second point worth noting is that the disk should be commented (given a volume name) in format, to ensure that somebody doesn't grab it and re-format it.
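
That is format's volname subcommand - an illustrative session (the name is limited to eight characters):

Code:
format> volname nhhasm01
format> label
Ready to label disk, continue? y

The name then shows up in the format disk list, flagging the disk as in use.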

Thirdly, it's worth giving the disks meaningful names when setting up the services, as follows:

Code:
root@fvssphsun01:/export/home/e415243# ldm list-services
VCC
    NAME             LDOM             PORT-RANGE
    primary-vcc0     primary          5000-5100

VSW
    NAME             LDOM             MAC               NET-DEV   ID   DEVICE     LINKPROP   DEFAULT-VLAN-ID PVID VID                  MTU   MODE   INTER-VNET-LINK
    primary-vsw0     primary          00:14:4f:fa:b3:18 aggr0     0    switch@0              1               1                         1500         on
    primary-vsw1     primary          00:14:4f:f9:66:7a aggr1     1    switch@1              1               1                         1500         on

VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    primary-vds0     primary          nhh_ba_vol1                                    /dev/rdsk/c0t600507680C808278000000000000048Fd0s0
                                      nhh_ba_vol2                                    /dev/rdsk/c0t600507680C8082780000000000000491d0s0
                                      nhh_db_vol1                                    /dev/rdsk/c0t600507680C808278000000000000048Ed0s0
                                      nhh_db_vol2                                    /dev/rdsk/c0t600507680C8082780000000000000490d0s0
                                      nhh_asm_01                                     /dev/rdsk/c0t600507680C8082780000000000000492d0s0
                                      nhh_asm_02                                     /dev/rdsk/c0t600507680C8082780000000000000493d0s0
                                      nhh_asm_03                                     /dev/rdsk/c0t600507680C8082780000000000000494d0s0
                                      nhh_asm_04                                     /dev/rdsk/c0t600507680C8082780000000000000495d0s0
                                      nhh_asm_05                                     /dev/rdsk/c0t600507680C8082780000000000000496d0s0
                                      nhh_asm_06                                     /dev/rdsk/c0t600507680C8082780000000000000497d0s0
                                      nhh_asm_07                                     /dev/rdsk/c0t600507680C8082780000000000000498d0s0
                                      nhh_asm_08                                     /dev/rdsk/c0t600507680C8082780000000000000499d0s0
                                      nhh_asm_09                                     /dev/rdsk/c0t600507680C808278000000000000049Ad0s0
                                      nhh_asm_10                                     /dev/rdsk/c0t600507680C808278000000000000049Bd0s0

root@fvssphsun01:/export/home/e415243# ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  NORM  UPTIME
primary          active     -n-cv-  UART    16    32G      0.1%  0.1%  3d 43m
fbasphnhhp01     active     -n----  5001    16    32G      0.0%  0.0%  1d 7h 30m
fdbsphnhhp01     active     -n----  5000    48    96G      0.1%  0.0%  1d 7h 22m
root@fvssphsun01:/export/home/e415243#
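
For reference, each of those volumes gets exported with ldm add-vdsdev and handed to the guest with ldm add-vdisk - a sketch along these lines, using the names from the listing above:

Code:
ldm add-vdsdev /dev/rdsk/c0t600507680C8082780000000000000492d0s0 nhh_asm_01@primary-vds0
ldm add-vdisk nhh_asm_01 nhh_asm_01@primary-vds0 fdbsphnhhp01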

Another thing I noticed is that OVM 3.5 is a long way from LDoms for Solaris 1.0.1.

I eventually managed to get the local repo set up - not without incident - and have the local LDOMs pointing at it.

Regards

Gull04
# 5  
Old 04-13-2018
I have also noticed that it makes no difference whether you put /dev/dsk or /dev/rdsk paths inside the vds - the result is the same.

Starting the slice at cylinder 0 will cause havoc: at some point ASM will overwrite the label, making the disk unusable to the operating system until it is relabeled, and possibly losing database data.

Don't fall into the primary-vds trap.
Having one disk service for a lot of disks can cause problems.
I've had issues with such configurations, with systems complaining they are out of LDC channels and the like.

Having fbasphnhhp01-vds and fdbsphnhhp01-vds will add readability and stability - see the sketch below.
Naming things properly is really important in Oracle VM; when things go wild it's easier to track.
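
A sketch of that per-guest layout; the service names follow the guest names above, the device path is illustrative:

Code:
# One virtual disk service per guest domain
ldm add-vds fbasphnhhp01-vds primary
ldm add-vds fdbsphnhhp01-vds primary

# Attach each guest's backends to its own service
ldm add-vdsdev /dev/rdsk/c0t600507680C8082780000000000000492d0s0 nhh_asm_01@fdbsphnhhp01-vds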

A couple more hints:

Keep the names of the vdsdevs the same as the vdisks added to the LDOM.
Enumerate your vdisks with ldm add-vdisk id=N ...

I tend to leave the first IDs (0-10) for system stuff (migration of rpools, swap/flash devices etc.) and use higher numbers for ASM devices.
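
For example - disk and volume names are illustrative, following the scheme above:

Code:
# The system disk keeps a low ID, ASM disks start at 11
ldm add-vdisk id=0 rootdisk rootdisk@fdbsphnhhp01-vds fdbsphnhhp01
ldm add-vdisk id=11 nhh_asm_01 nhh_asm_01@fdbsphnhhp01-vds fdbsphnhhp01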

Limit kernel memory consumption (effectively the ARC cache) inside the LDOM if you are running an Oracle database on ASM:
Code:
* /etc/system: % of total memory reserved for applications (the SGA in your case)
set user_reserve_hint_pct=N

So a 100 GB system with a 10 GB rpool and Oracle home will work nicely with the above value set to 90 (%), with the SGA set to 80 GB, leaving the rest for other operations if required.

Be sure to calculate your requirements; it's a tunable that can be changed at runtime.

This is of course a choice, but one that helps in the long run (fire and forget).

Hope that helps and I didn't bore you to death.
Regards
Peasant