Backup Sun StorageTek Common Array Manager's configuration


 
# 1  
Old 04-26-2011

I couldn't find anything in the Sun manuals about backing up the Sun StorageTek Common Array Manager (CAM) configuration. Is there a way to do it, similar to backing up a Brocade switch configuration? CAM is running under Solaris 10.

Thank you in advance!

5 More Discussions You Might Find Interesting

1. Solaris

Common Array Manager

Hi! Maybe the Solaris forum is not the best choice for this kind of question, but I really need help with a SunStorage FC array. AFAIK this array can be configured only with the CAM software from Sun, but sadly all previously free Metalink downloads are now accessible only as part of paid support (and I... (0 Replies)
Discussion started by: Gleb Erty

2. Filesystems, Disks and Memory

SAN questions about Sun StorageTek array

Hi, I have a question about Sun StorageTek Common Array Manager (CAM): what is the concept of a 'host'? Is it the hostname of the server that has access to the managed array? If so, can I use its IP instead of its hostname? I've found a 'host' under CAM called XYZ (see below). In our... (7 Replies)
Discussion started by: aixlover

3. Solaris

Sun StorageTek Common Array Manager 6.0 works very slowly

Hi! I have a Sun StorageTek 2540 FC array and CAM works very slowly - I sometimes wait more than 2 minutes for the software to respond... I run this software on a Windows machine with the Firefox web browser, but the speed is terrible... How can I make it work at least a little bit faster?.. (2 Replies)
Discussion started by: Sapfeer

4. Solaris

Accessing a StorageTek 2530 Disk array from SUN, SPARC Enterprise T2000

Hello, I'm wondering if anyone can help me with mounting a file share from my Sun T2000 server running Solaris 10 to my connected 2530 disk array. I believe I've connected the disk array correctly, and I have created a volume on the array using the filesystem (Sun_SAM-FS, RAID-5). The T2000... (15 Replies)
Discussion started by: DundeeDancer

5. Filesystems, Disks and Memory

Configure large volume on Sun StorageTek 2540 array

Hi, we have 12x1TB SATA disks in our array and I need to create a 10TB volume. I defined a new storage profile on the array, and when I tried to add a volume, I ran into a ~2TB limit for new volumes. I couldn't find how to set another limit in my storage profile. Is there a way to configure one large... (3 Replies)
Discussion started by: Sapfeer
scconf_dg_svm(1M)					  System Administration Commands					 scconf_dg_svm(1M)

NAME
       scconf_dg_svm - change Solaris Volume Manager device group configuration

SYNOPSIS
       scconf -c -D [generic_options]

DESCRIPTION
       Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster
       software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For
       more information about the object-oriented command set, see the Intro(1CL) man page.

       The following information is specific to the scconf command. To use the equivalent object-oriented commands, see the cldevicegroup(1CL)
       man page.

       A Solaris Volume Manager device group is defined by a name, the nodes upon which this group can be accessed, a global list of devices
       in the disk set, and a set of properties used to control actions such as potential primary preference and failback behavior.

       For Solaris Volume Manager device groups, only one disk set can be assigned to a device group, and the group name must always match the
       name of the disk set itself.

       In Solaris Volume Manager, a multihosted or shared device is a grouping of two or more hosts and disk drives that are accessible by all
       hosts, and that have the same device names on all hosts. This identical device naming requirement is achieved by using the raw disk
       devices to form the disk set. The device ID pseudo driver (DID) allows multihosted devices to have consistent names across the cluster.
       Only hosts already configured as part of a disk set itself can be configured into the nodelist of a Solaris Volume Manager device
       group. At the time drives are added to a shared disk set, they must not belong to any other shared disk set.

       The Solaris Volume Manager metaset command creates the disk set, which also initially creates and registers it as a Solaris Volume
       Manager device group. Next, you must use the scconf command to set the node preference list and the preferenced, failback, and
       numsecondaries suboptions.

       If you want to change the order of the node preference list or the failback mode, you must specify all the nodes that currently exist
       in the device group in the nodelist. In addition, if you are changing the order of node preference, you must also set the preferenced
       suboption to true. If you do not specify the preferenced suboption with the "change" form of the command, the already established true
       or false setting is used.

       You cannot use the scconf command to remove the Solaris Volume Manager device group from the cluster configuration. Use the Solaris
       Volume Manager metaset command instead. You remove a device group by removing the Solaris Volume Manager disk set.

OPTIONS
       See scconf(1M) for the list of supported generic options. See metaset(1M) for the list of metaset related commands to create and remove
       disk sets and device groups.

       Only one action option is allowed in the command. The following action options are supported.

       -c     Change the ordering of the node preference list, change preference and failback policy, and change the desired number of
              secondaries.

EXAMPLES
       Example 1 Creating and Registering a Disk Set

       The following metaset command creates the disk set diskset1 and registers it as a Solaris Volume Manager device group. Next, the scconf
       command is used to specify the order of the potential primary nodes for the device group, change the preferenced and failback options,
       and change the desired number of secondaries.

         host1# metaset -s diskset1 -a -h host1 host2

         host1# scconf -c -D name=diskset1,nodelist=host2:host1, \
                preferenced=true,failback=disabled,numsecondaries=1
ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:

       +-----------------------------+-----------------------------+
       |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
       +-----------------------------+-----------------------------+
       |Availability                 |SUNWsczu                     |
       +-----------------------------+-----------------------------+
       |Interface Stability          |Evolving                     |
       +-----------------------------+-----------------------------+

SEE ALSO
       Intro(1CL), cldevicegroup(1CL), scconf(1M), metaset(1M)

Sun Cluster 3.2                                                   10 Jul 2006                                                   scconf_dg_svm(1M)
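
A minimal follow-on sketch, building on Example 1 from the man page above: how the node preference order of the same device group could later be reversed and then verified. It assumes the two-node diskset1 configuration from the example; per the DESCRIPTION, reordering requires listing every node currently in the group and setting preferenced to true. The host names and disk set name are illustrative, taken from Example 1.

         # Display the current disk set membership and ownership
         host1# metaset -s diskset1

         # Reverse the node preference order; all current nodes must be listed
         # and preferenced=true must be set when changing the order
         host1# scconf -c -D name=diskset1,nodelist=host1:host2,preferenced=true

         # Print the cluster configuration, including device groups, to verify
         host1# scconf -p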