Operating Systems > Solaris
Post 302363452 by Sapfeer on Tuesday, 20 October 2009, 10:19 AM
Sun StorageTek Common Array Manager 6.0 works very slowly

Hi!
I have a Sun StorageTek 2540 FC array, and CAM works very slowly - I can wait more than 2 minutes for the software to respond... I run this software on a Windows machine with the Firefox web browser, but the speed is terrible... How can I make it work at least a little bit faster?..
 

5 More Discussions You Might Find Interesting

1. Filesystems, Disks and Memory

Configure large volume on Sun StorageTek 2540 array

Hi, We have 12x1TB SATA disks in our array and I need to create a 10TB volume. I defined a new storage profile on the array, but when I tried to add a volume I hit a ~2TB limit for new volumes. I didn't find how to set another limit in my storage profile. Is there a way to configure one large... (3 Replies)
Discussion started by: Sapfeer

2. Solaris

Accessing a StorageTek 2530 Disk array from SUN, SPARC Enterprise T2000

Hello, Wondering if anyone can help me with mounting a file share from my Sun T2000 server running Solaris 10 to my connected 2530 disk array? I believe I've connected the disk array correctly and I have created a volume on the array using the filesystem (Sun_SAM-FS, RAID-5). The T2000... (15 Replies)
Discussion started by: DundeeDancer

3. Filesystems, Disks and Memory

Backup Sun StorageTek Common Array Manager's configuration

In the Sun manuals, I didn't find how to back up the Sun StorageTek Common Array Manager's configuration. Is there a way to do it, like backing up a Brocade switch configuration? CAM is running under Solaris 10. Thank you in advance! (0 Replies)
Discussion started by: aixlover

4. Filesystems, Disks and Memory

SAN questions about Sun StorageTek array

Hi, I have a question about Sun StorageTek Common Array Manager (CAM): What is the concept of 'host'? Is it the hostname of the server that has access to the managed array? If so, can I use its IP instead of its hostname? I've found a 'host' under CAM called XYZ (See below). In our... (7 Replies)
Discussion started by: aixlover

5. Solaris

Common Array Manager

Hi! Maybe the Solaris forum is not the best choice for asking such a question, but I really need help with a SunStorage FC array. AFAIK this array can be configured only with Sun's CAM software, but sadly all previously free Metalink downloads are now accessible only as part of paid support (and I... (0 Replies)
Discussion started by: Gleb Erty
scgdevs(1M)						  System Administration Commands					       scgdevs(1M)

NAME
       scgdevs - global devices namespace administration script

SYNOPSIS
       /usr/cluster/bin/scgdevs

DESCRIPTION
       Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

       The scgdevs command manages the global devices namespace. The global devices namespace is mounted under the /global directory and consists of a set of logical links to physical devices. As the /dev/global directory is visible to each node of the cluster, each physical device is visible across the cluster. This means that any disk, tape, or CD-ROM that is added to the global-devices namespace can be accessed from any node in the cluster.

       The scgdevs command enables you to attach new global devices (for example, tape drives, CD-ROM drives, and disk drives) to the global-devices namespace without requiring a system reboot. You must run the devfsadm command before you run the scgdevs command. Alternatively, you can perform a reconfiguration reboot to rebuild the global namespace and attach new global devices. See the boot(1M) man page for more information about reconfiguration reboots.

       You must run this command from a node that is a current cluster member. If you run this command from a node that is not a cluster member, the command exits with an error code and leaves the system state unchanged.

       You can use this command only in the global zone.

       You need solaris.cluster.system.modify RBAC authorization to use this command. See the rbac(5) man page. You must also be able to assume a role to which the Sun Cluster Commands rights profile has been assigned to use this command. Authorized users can issue privileged Sun Cluster commands on the command line from the pfsh, pfcsh, or pfksh profile shell.
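The devfsadm-before-scgdevs ordering described above can be sketched as a small POSIX shell wrapper. This is illustrative only: both commands exist only on a Sun Cluster node, so the command paths are made overridable for dry runs elsewhere, and the DEVFSADM/SCGDEVS variable names and the function name are my own, not part of Sun Cluster.

```shell
# Rebuild the local /devices tree, then attach any new devices to the
# global-devices namespace. The order matters: devfsadm must run first.
# Command paths are overridable so the sketch can be exercised off-cluster.
DEVFSADM=${DEVFSADM:-/usr/sbin/devfsadm}
SCGDEVS=${SCGDEVS:-/usr/cluster/bin/scgdevs}

update_global_devices() {
    "$DEVFSADM" || return 1   # local device tree first
    # scgdevs exits nonzero (with errors on standard output) if this
    # node is not a current cluster member or not in the global zone.
    "$SCGDEVS"
}
```

Run it as root, or via a profile shell or pfexec, on a current cluster member; a nonzero return means the global namespace was not updated.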
       A profile shell is a special kind of shell that enables you to access privileged Sun Cluster commands that are assigned to the Sun Cluster Commands rights profile. A profile shell is launched when you run the su command to assume a role. You can also use the pfexec command to issue privileged Sun Cluster commands.

EXIT STATUS
       The following exit values are returned:

       0         The command completed successfully.

       nonzero   An error occurred. Error messages are displayed on the standard output.

FILES
       /devices            Device nodes directory

       /global/.devices    Global devices nodes directory

       /dev/md/shared      Solaris Volume Manager metaset directory

ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:

       +---------------------+-----------------+
       |   ATTRIBUTE TYPE    | ATTRIBUTE VALUE |
       +---------------------+-----------------+
       | Availability        | SUNWsczu        |
       +---------------------+-----------------+
       | Interface Stability | Evolving        |
       +---------------------+-----------------+

SEE ALSO
       pfcsh(1), pfexec(1), pfksh(1), pfsh(1), Intro(1CL), cldevice(1CL), boot(1M), devfsadm(1M), su(1M), did(7)

       Sun Cluster System Administration Guide for Solaris OS

NOTES
       The scgdevs command, called from the local node, will perform its work on remote nodes asynchronously. Therefore, command completion on the local node does not necessarily mean that the command has completed its work clusterwide.

       This document does not constitute an API. The /global/.devices directory and the /devices directory might not exist or might have different contents or interpretations in a future release. The existence of this notice does not imply that any other documentation that lacks this notice constitutes an API. This interface should be considered an unstable interface.

Sun Cluster 3.2                  10 Apr 2006                       scgdevs(1M)
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.