UNIX for Advanced & Expert Users: Sun M5000 servers disaster recovery plan - Post 302523429 by rbadillarx on Thursday 19th of May 2011 12:37:35 AM
Hi aixlover:
Take your time, and keep in mind that with a single storage array you have to use double the LUNs to accomplish a DR plan, and nothing will help you if that array itself fails. If your storage has capabilities like shadow copies/LUN cloning, it will be easier (and faster).

And I don't expect anything beyond a good exchange of ideas; a big thanks is as good as diamonds, so don't worry.

PS: A standby DB is what people call the poor man's Oracle Data Guard. The main DB runs in archivelog mode, and those archive logs are transported/copied to a standby DB (another machine with a DB instance that stays down until you need to apply the archives). Depending on your needs, the standby can be as little as one minute behind the main DB.
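Just to sketch the idea (hostnames, paths, and the directories below are made-up examples, not from any real setup, and you'd need ssh keys plus rsync between the boxes), the shipping side can be a small cron job on the primary like this:

    #!/bin/sh
    # Poor man's Data Guard, shipping side (example only).
    # Assumes the primary DB runs in archivelog mode and writes its
    # archive logs to ARCH_DIR; the standby expects them in STANDBY_DIR.
    ARCH_DIR=/u01/oradata/PROD/arch
    STANDBY=standbyhost
    STANDBY_DIR=/u01/oradata/PROD/arch

    # Copy only the archive logs the standby doesn't already have.
    rsync -a "$ARCH_DIR"/ "$STANDBY:$STANDBY_DIR"/ \
        && logger -p daemon.notice "archive ship to $STANDBY ok" \
        || logger -p daemon.err "archive ship to $STANDBY FAILED"

    # On the standby, a companion job applies whatever arrived, e.g.:
    #   sqlplus -s "/ as sysdba" <<EOF
    #   RECOVER AUTOMATIC STANDBY DATABASE;
    #   EOF

How far the standby lags the main DB then just depends on how often the logs switch and how often you run the copy/apply jobs.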
 

8 More Discussions You Might Find Interesting

1. Solaris

Sun Solaris Disaster Recovery Plan ...

Hi all, I'm new in this domain, and on this operating system, but in my job, i have to find a way to make a disaster recovery plan for our server (sun solaris) which has the oracle database. I don't have any standby servers to be used for data replication, i only want to use CD's to put on... (4 Replies)
Discussion started by: sam212

2. Solaris

M5000 servers - XSCF functions

Hi all A couple of new M5000 servers will be arriving soon and I need to work out how to configure the underlying domains. It will come with 4 x CPUS, 32 Gb of memory, 4 x HD, 2 x IOtrays. Reading the XSCF manual, I can configure the item into 2 domains, something about uni / quad xsb.... (9 Replies)
Discussion started by: sbk1972

3. AIX

Plan to shutdown servers

Hello everyone I need to shutdown all my servers and my storage. I would like to hear your opinions about this. This is my little plan about all this. 1.-Stop the applications 2.-Stop the webservers 3.-Stop the ihs 4.-Stop the databases 5.-Verify no process are running 6.-Close the... (1 Reply)
Discussion started by: lo-lp-kl

4. Solaris

XSCF prompt disappeared, Sun M5000

Hi, I've got an issue here: After I logon to the xscf prompt of this Sun M5000 and did 'XSCF> version -c xcp', the xscf prompt disappeared. I can't get it back and can't log out. exit rebootxscf logout #. #> #> ~# ~# exit sendbreak exit I tried to set the Mode Switch to the service... (3 Replies)
Discussion started by: aixlover

5. UNIX for Advanced & Expert Users

Sun M5000 hardware domain creation - How to keep current OS?

Hi, This Sun M5000 currently has a single domain (0) created. Solaris 10 is installed on this domain with some local zones. Now we need to add a secondary hardware domain (1) to this M5000. My question: Can we keep the current OS settings on domain 0 when adding the secondary domain 1? ... (0 Replies)
Discussion started by: aixlover

6. Solaris

Sun M5000 hardware domain creation - How to keep current OS?

Hi, This Sun M5000 currently has a single domain (0) created. Solaris 10 is installed on this domain with some local zones. Now we need to add a secondary hardware domain (1) to this M5000. My question: Can we keep the current OS settings on domain 0 when adding the secondary domain 1? Thank... (5 Replies)
Discussion started by: aixlover

7. Solaris

Sun M5000, xscf showdevices questions

Hi, I've configured this Sun M5000 to use two system domains (added a second domain to it). The following is the info about the settings. The showdevices command can't show devices on domain 1 because Solaris is not loaded on it yet. My question: How can I confirm the hardware devices such as... (5 Replies)
Discussion started by: aixlover

8. Hardware

Sun/Oracle M5000 Question

I have an M5K with disk issues I'm trying to troubleshoot, and it's causing me some confusion. I know the host has four on-board HDDs, but the format command's output shows c0t0d0, c0t1d0, c3t0d0, and c3t1d0. cfgadm -al shows c0t3d0 as the DVD-ROM drive, and c1 and c2 as SAN disks. Confused yet? ... (2 Replies)
Discussion started by: desertdenizen
SUNW.HAStoragePlus(5)					     Sun Cluster Miscellaneous					     SUNW.HAStoragePlus(5)

NAME
SUNW.HAStoragePlus - resource type that enforces dependencies between Sun Cluster device services, file systems, and data services

DESCRIPTION
SUNW.HAStoragePlus describes a resource type that enables you to specify dependencies between data service resources and device groups, cluster file systems, and local file systems.

Note - Local file systems include the UNIX File System (UFS), Quick File System (QFS), Veritas File System (VxFS), and Solaris ZFS (Zettabyte File System).

This resource type enables you to bring data services online only after their dependent device groups and file systems are guaranteed to be available. The SUNW.HAStoragePlus resource type also provides support for mounting, unmounting, and checking file systems.

Resource groups by themselves do not provide for direct synchronization with disk device groups, cluster file systems, or local file systems. As a result, during a cluster reboot or failover, an attempt to start a data service can occur while its dependent global devices and file systems are still unavailable. Consequently, the data service's START method might time out, and your data service might fail.

The SUNW.HAStoragePlus resource type represents the device groups, cluster, and local file systems that are to be used by one or more data service resources. You add a resource of type SUNW.HAStoragePlus to a resource group and set up dependencies between other resources and the SUNW.HAStoragePlus resource. These dependencies ensure that the data service resources are brought online after the following situations occur:

1. All specified device services are available (and collocated, if necessary).
2. All specified file systems are checked and mounted.

You can also use the SUNW.HAStoragePlus resource type to access a local file system from a non-global zone.

EXTENSION PROPERTIES
The following extension properties are associated with the SUNW.HAStoragePlus resource type:

AffinityOn
Specifies whether a SUNW.HAStoragePlus resource needs to perform an affinity switchover for all global devices that are defined in the GlobalDevicePaths and FilesystemMountPoints extension properties. You can specify TRUE or FALSE. Affinity switchover is set by default, that is, AffinityOn is set to TRUE.

The Zpools extension property ignores the AffinityOn extension property. The AffinityOn extension property is intended for use with the GlobalDevicePaths and FilesystemMountPoints extension properties only.

When you set the AffinityOn extension property to FALSE, the SUNW.HAStoragePlus resource passively waits for the specified global services to become available. In this case, the primary node or zone of each online global device service might not be the same node or zone that is the primary node for the resource group.

The purpose of an affinity switchover is to enhance performance by ensuring the colocation of the device groups and the resource groups on a specific node or zone. Data reads and writes always occur over the device primary paths. Affinity switchovers require the potential primary node list for the resource group and the node list for the device group to be equivalent.

The SUNW.HAStoragePlus resource performs an affinity switchover for each device service only once, that is, when the SUNW.HAStoragePlus resource is brought online.

The setting of the AffinityOn flag is ignored for scalable services. Affinity switchovers are not possible with scalable resource groups.

FilesystemCheckCommand
Overrides the check that SUNW.HAStoragePlus conducts on each unmounted file system before attempting to mount it. You can specify an alternate command string or executable, which is invoked on all unmounted file systems. When a SUNW.HAStoragePlus resource is configured in a scalable resource group, the file-system check on each unmounted cluster file system is omitted.

The default value for the FilesystemCheckCommand extension property is NULL. When you set this extension property to NULL, Sun Cluster checks UFS or VxFS by issuing the /usr/sbin/fsck -o p command. Sun Cluster checks other file systems by issuing the /usr/sbin/fsck command. When you set the FilesystemCheckCommand extension property to another command string, SUNW.HAStoragePlus invokes this command string with the file system mount point as an argument. You can specify any arbitrary executable in this manner. A nonzero return value is treated as an error that occurred during the file system check operation. This error causes the START method to fail. When you do not require a file system check operation, set the FilesystemCheckCommand extension property to /bin/true.
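For illustration only, a check script of the "arbitrary executable" kind described above might look like the sketch below; the path /opt/HAStorage/bin/fsck_wrapper and the vfstab lookup are assumptions, not something defined by the resource type. SUNW.HAStoragePlus simply invokes whatever you configure with the mount point as its single argument.

    #!/bin/sh
    # Hypothetical FilesystemCheckCommand target. HAStoragePlus passes
    # the mount point as $1; a non-zero exit makes the START method fail.
    MNT="$1"

    # Find the "device to fsck" (field 2) for this mount point (field 3)
    # in /etc/vfstab.
    RAW=`/usr/bin/nawk -v m="$MNT" '$3 == m { print $2 }' /etc/vfstab`

    if [ -z "$RAW" ] || [ "$RAW" = "-" ]; then
        logger -p daemon.err "fsck_wrapper: no fsck device for $MNT"
        exit 1
    fi

    logger -p daemon.notice "fsck_wrapper: preen-checking $RAW ($MNT)"
    exec /usr/sbin/fsck -o p "$RAW"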
FilesystemMountPoints
Specifies a list of valid file system mount points. You can specify global or local file systems. Global file systems are accessible from all nodes or zones in a cluster. Local file systems are accessible from a single cluster node or zone. Local file systems that are managed by a SUNW.HAStoragePlus resource are mounted on a single cluster node or zone. These local file systems require the underlying devices to be Sun Cluster global devices.

These file system mount points are defined in the format paths[,...]. You can specify both the path in a non-global zone and the path in a global zone, in this format: Non-GlobalZonePath:GlobalZonePath. The global zone path is optional. If you do not specify a global zone path, Sun Cluster assumes that the path in the non-global zone and in the global zone are the same. If you specify the path as Non-GlobalZonePath:GlobalZonePath, you must specify GlobalZonePath in the global zone's /etc/vfstab. The default setting for this property is an empty list.

You can use the SUNW.HAStoragePlus resource type to make a file system available to a non-global zone. To enable the SUNW.HAStoragePlus resource type to do this, you must create a mount point in the global zone and in the non-global zone. The SUNW.HAStoragePlus resource type makes the file system available to the non-global zone by mounting the file system in the global zone. The resource type then performs a loopback mount in the non-global zone.

Each file system mount point should have an equivalent entry in /etc/vfstab on all cluster nodes and in all global zones. The SUNW.HAStoragePlus resource type does not check /etc/vfstab in non-global zones.

SUNW.HAStoragePlus resources that specify local file systems can only belong in a failover resource group with affinity switchovers enabled. These local file systems can therefore be termed failover file systems. You can specify both local and global file system mount points at the same time. Any file system whose mount point is present in the FilesystemMountPoints extension property is assumed to be local if its /etc/vfstab entry satisfies both of the following conditions:

1. The non-global mount option is specified.
2. The "mount at boot" field for the entry is set to "no."

A Solaris ZFS is always a local file system. Do not list a ZFS in /etc/vfstab. Also, do not include ZFS mount points in the FilesystemMountPoints property.

GlobalDevicePaths
Specifies a list of valid global device group names or global device paths. The paths are defined in the format paths[,...]. The default setting for this property is an empty list.

Zpools
Specifies a list of valid ZFS storage pools, each of which contains at least one ZFS. These ZFS storage pools are defined in the format paths[,...]. The default setting for this property is an empty list. All file systems in a ZFS storage pool are mounted and unmounted together.

The Zpools extension property enables you to specify ZFS storage pools. The devices that make up a ZFS storage pool must be accessible from all the nodes or zones that are configured in the node list of the resource group to which a SUNW.HAStoragePlus resource belongs. A SUNW.HAStoragePlus resource that manages a ZFS storage pool can only belong to a failover resource group. When a SUNW.HAStoragePlus resource that manages a ZFS storage pool is brought online, the ZFS storage pool is imported, and every file system that the ZFS storage pool contains is mounted. When the resource is taken offline on a node, for each managed ZFS storage pool, all file systems are unmounted and the ZFS storage pool is exported.

Note - SUNW.HAStoragePlus does not support file systems created on ZFS volumes.

ZpoolsSearchDir
Specifies the location to search for the devices of Zpools. The default value for the ZpoolsSearchDir extension property is /dev/dsk. The ZpoolsSearchDir extension property is similar to the -d option of the zpool command.
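As a worked example of how these properties fit together, the following Sun Cluster 3.2-style commands create a failover SUNW.HAStoragePlus resource; the resource group, resource names, mount point, and pool name (ora-rg, ora-hasp-rs, /oradata, orapool) are placeholders, so check clresource(1CL) and the data service guides for your release before using them.

    # Register the resource type once per cluster.
    clresourcetype register SUNW.HAStoragePlus

    # Failover file system: /oradata must have a matching /etc/vfstab
    # entry on every node ("mount at boot" set to no, no "global" option).
    clresource create -g ora-rg -t SUNW.HAStoragePlus \
        -p FilesystemMountPoints=/oradata \
        -p AffinityOn=True \
        ora-hasp-rs

    # ZFS variant: hand the whole pool to the resource instead of using
    # /etc/vfstab; the pool is imported when the resource comes online
    # and exported when it goes offline.
    # clresource create -g ora-rg -t SUNW.HAStoragePlus \
    #     -p Zpools=orapool \
    #     ora-hasp-zfs-rs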
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:

    +-----------------------------+-----------------------------+
    |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
    +-----------------------------+-----------------------------+
    |Availability                 |SUNWscu                      |
    +-----------------------------+-----------------------------+

SEE ALSO
rt_reg(4), attributes(5)

WARNINGS
Make data service resources within a given resource group dependent on a SUNW.HAStoragePlus resource. Otherwise, no synchronization is possible between the data services and the global devices or file systems. Strong resource dependencies ensure that the SUNW.HAStoragePlus resource is brought online before other resources. Local file systems that are managed by a SUNW.HAStoragePlus resource are mounted only when the resource is brought online.

Enable logging on UFS systems.

Avoid configuring multiple SUNW.HAStoragePlus resources in different resource groups that refer to the same device group and with AffinityOn flags set to TRUE. Redundant device switchovers can occur. As a result, resource and device groups might be dislocated.

Avoid configuring a ZFS storage pool under multiple SUNW.HAStoragePlus resources in different resource groups.
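A minimal example of the dependency that the first warning asks for, again with placeholder resource names (ora-listener-rs for the data service resource, ora-hasp-rs for the storage resource):

    # Make the data service resource start only after the
    # SUNW.HAStoragePlus resource is online.
    clresource set -p Resource_dependencies=ora-hasp-rs ora-listener-rs

    # If the storage is slow to come up, the tunable Prenet_Start_Timeout
    # mentioned under NOTES below can be raised on the storage resource:
    # clresource set -p Prenet_Start_Timeout=600 ora-hasp-rs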
NOTES
The SUNW.HAStoragePlus resource is capable of mounting any cluster file system that is found in an unmounted state. All file systems are mounted in the overlay mode. Local file systems are forcibly unmounted.

The waiting time for all device services and file systems to become available is specified by the Prenet_Start_Timeout property in SUNW.HAStoragePlus. This is a tunable property.

SunOS 5.9                         25 Sep 2007                         SUNW.HAStoragePlus(5)