Operating Systems :: Solaris
How to configure my SAN with Sun V880 servers to run Oracle 9i
Post 80924 by 98_1LE on Saturday 13th of August 2005, 10:14:26 PM
The internal disks in a V880 are on an internal FC-AL loop and cannot participate in the SAN.

As for configuring the "SAN", that depends on the actual storage array, switches, and HBAs that you have. Generally speaking, you will need to format the parity groups into LDEVs, run fibre, present LUNs to a port, configure LUN security, then configure the zones on the switch. Then, assuming you are using Sun HBAs (QLogic) with the Leadville drivers, you `cfgadm -c configure c# c#` the controllers and put a VTOC on the disks.
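As a rough sketch of the host-side part of that, something like the following (the controller numbers c2/c3 are placeholders, not from the original post; check `cfgadm -al` on your own box first):

```shell
# Sketch only, assuming Sun/QLogic HBAs with the Leadville stack.
# c2 and c3 are hypothetical controller numbers.
cfgadm -al                    # list attachment points; fabric HBAs show as "fc-fabric"
cfgadm -c configure c2 c3     # bring the presented LUNs online on both controllers
devfsadm -C                   # rebuild /dev links for the newly visible disks
format                        # select each new LUN and label it to write a VTOC
```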

You can use either Veritas Volume Manager and file system, or Veritas Storage Foundation for Oracle, which also comes in an HA flavor (VCS). Veritas brings some flexibility, simplicity, and performance, but is expensive. Solstice DiskSuite (SDS) and the UFS file system ship with Solaris.

If you are using hardware RAID 10 on your array, I would use concatenated volumes. I usually put the ORACLE_BASE/HOME under /app/oracle, put the database under /u01 through /u05, and use /u06 for archive logs. If I need a local slice for backups, I put it at /app/backups.
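A minimal sketch of one such volume with VxVM (the disk group name `oradg` and the 50g size are made-up examples, since the array is already doing the RAID 10 underneath):

```shell
# Hypothetical: a concatenated volume for /u01 in a disk group named "oradg"
vxassist -g oradg make u01vol 50g layout=concat
mkfs -F vxfs /dev/vx/rdsk/oradg/u01vol
mkdir -p /u01
mount -F vxfs /dev/vx/dsk/oradg/u01vol /u01
```

Repeat for /u02 through /u06, then add the matching entries to /etc/vfstab.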

If using vxfs, make sure the file system block size equals, or divides evenly into, your DB/tablespace block size, and use the largest such value.
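A quick sketch of picking that block size; the 8192 db_block_size here is an assumption for illustration, not something from the post (VxFS supports block sizes from 1024 to 8192 bytes):

```shell
# Sketch: pick the largest VxFS block size that divides evenly into
# the Oracle db_block_size. DB_BLOCK=8192 is an assumed example value.
DB_BLOCK=8192
FS_BLOCK=1024
for bsize in 1024 2048 4096 8192; do
    if [ $((DB_BLOCK % bsize)) -eq 0 ]; then
        FS_BLOCK=$bsize
    fi
done
echo "mkfs -F vxfs -o bsize=$FS_BLOCK ..."   # with an 8K db block, this picks 8192
```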

All of my V880s have only six internal 72GB disks. I usually use two for the OS, mirrored with SDS, and the other four for rootdg with pre-4.0 VxVM. With 4.x, I make them their own disk group, and would put your Oracle binaries and indexes on those disks and the rest of the database on the SAN.
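The SDS mirror of the two OS disks might be set up along these lines; the disk, slice, and metadevice names (c1t0d0, d10-d12) are illustrative placeholders, not from the post:

```shell
# Sketch: mirror the root slice of c1t0d0 onto c1t1d0 with SDS.
metadb -a -f -c 3 c1t0d0s7 c1t1d0s7   # state database replicas on both disks
metainit -f d11 1 1 c1t0d0s0          # submirror from the existing root slice
metainit d12 1 1 c1t1d0s0             # submirror on the second disk
metainit d10 -m d11                   # one-way mirror of root
metaroot d10                          # update /etc/vfstab and /etc/system
# reboot, then attach the second half; the resync starts automatically:
metattach d10 d12
```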

Realize that if you put SAN disks and the internal disks in the same disk group/set, you may see errors at boot or in single-user mode, because the drivers for the SAN HBAs have not loaded yet and only part of the disk group will be visible.

Be sure to put noatime as a mount option in the vfstab, and if using UFS, add logging as well.
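For example, entries in /etc/vfstab might look like this (the device and metadevice names are placeholders):

```
# device to mount          device to fsck              mount   FS    fsck  mount    mount
#                                                      point   type  pass  at boot  options
/dev/vx/dsk/oradg/u01vol   /dev/vx/rdsk/oradg/u01vol   /u01    vxfs  2     yes      noatime
/dev/md/dsk/d30            /dev/md/rdsk/d30            /app    ufs   2     yes      noatime,logging
```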
 
