Full Discussion: Oracle ASM on Solaris 11.3
Post 303015756 by bakunin, Wednesday 11th of April 2018, 05:35 PM
Quote:
Originally Posted by gull04
  1. Is there an equivalent way of doing this using a zpool?
  2. Would I be better going ZFS native for these disks, and how much of a performance hit would I take?
  3. Or should I just use the /dev/rdsk devices and absorb the admin load when it comes to database growth?
I haven't done anything in Solaris since the time they switched from steam-powered machines to electrical ones, but I have suffered a lot with ASM and with DBAs requesting it. Maybe this will be helpful to you:

There is virtually no performance hit unless the system is extremely low on memory and/or you are doing real-time processing of high loads. Under "normal" conditions enough memory is left over for the file cache that the difference between a filesystem and a raw device is negligible. Everything else is a myth passed from one DBA to the next. Anyway, give them whatever they want; it isn't worth the time to argue technical details with DBAs - if they understood them, they would be SysAdmins like us, wouldn't they?
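
If you want numbers instead of my word for it, a crude sanity check is to compare cached filesystem reads against reads from a raw slice, and to look at how much RAM the file cache actually gets. The paths and device names below are placeholders, not anything from this thread; point them at a test file and an otherwise unused slice on your own box:

Code:
# 1) Read a ~1 GB test file through the filesystem (benefits from the ZFS ARC / file cache)
dd if=/pool/oradata/testfile of=/dev/null bs=1024k count=1024

# 2) Read the same amount straight from a raw slice (no filesystem cache involved)
dd if=/dev/rdsk/c0t0d0s0 of=/dev/null bs=1024k count=1024

# 3) See how much memory currently sits in the page/file cache (needs root)
echo ::memstat | mdb -k

Run the dd lines a couple of times; the second pass of the filesystem read is where the cache shows its effect.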

With ASM you end up with "anonymous" disks, and nobody is really helped by that. About 10 years ago I had a system where several DB instances were running in parallel, all with ASM. The system had ~250 hdisk devices. Guess what? The DBAs eventually handed us the wrong hdisk number to delete (something like "hdisk145" instead of "hdisk154") and one DB came to a screeching halt. Since I had no way to find out what a certain disk was used for, or whether it was used at all (to the OS they all look unused), this was bound to happen sooner or later.
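
If you end up with ASM anyway, at least insist on getting the disk-to-diskgroup mapping out of the ASM instance before anything is ever removed. This is a sketch only - it assumes a running ASM instance, the Grid/ASM environment set in your shell, and SYSASM access; asmcmd and the v$asm_disk / v$asm_diskgroup views are the standard tools for this, though the exact output columns vary by version:

Code:
# Quick overview: every disk ASM knows about, with its OS path and disk group
asmcmd lsdsk -k

# The same information via SQL*Plus against the ASM instance
sqlplus -s / as sysasm <<'EOF'
SELECT g.name AS diskgroup, d.name AS asm_disk, d.path, d.header_status
FROM   v$asm_disk d LEFT JOIN v$asm_diskgroup g
       ON d.group_number = g.group_number
ORDER  BY g.name, d.path;
EOF

Keep a copy of that output with your own documentation, so the next "delete hdisk145" request can be checked against something.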

The compromise I set up was to create volume groups from the disks, create logical volumes inside them, and hand only those out to the DB as raw devices. That was AIX and the Solaris wording is different, but as far as I know a zpool is itself a sort of LVM, so you can surely translate that into the proper Solaris procedure. Never give them whole disk devices as raw devices! Give them volumes without a filesystem on them, and always keep a control layer (the volume manager) between the disks and the database so you stay in charge of what is used where.
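
Translated to Solaris, the nearest equivalent of such a raw logical volume is probably a ZFS volume (zvol) carved out of a zpool. A minimal sketch, with made-up pool/volume names and placeholder disks - check with the DBAs and the Oracle support notes whether they will accept zvols as ASM candidate disks before committing to this layout:

Code:
# Pool built from the LUNs earmarked for the database (mirror/raidz layout is a local policy decision)
zpool create oradata_pool c0t0d0 c0t1d0

# A fixed-size volume inside it; this is what you hand out instead of a bare disk
zfs create -V 200g oradata_pool/asmvol01

# The character device to give to the DB / ASM as its "raw device"
ls -lL /dev/zvol/rdsk/oradata_pool/asmvol01

# ASM needs the device owned by its software owner (user/group names here are placeholders,
# and making the ownership persistent across reboots is a site-specific exercise)
chown oracle:dba /dev/zvol/rdsk/oradata_pool/asmvol01

# Growing it later is one command, and the OS keeps a record of what belongs where
zfs set volsize=300g oradata_pool/asmvol01

Even if the DBAs stay with plain /dev/rdsk devices in the end, the point stands: keep some layer under your control between the physical disks and the database.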

I hope this helps.

bakunin
 
