Full Discussion: A5200 vs A1000
Posted by cy1972 in Operating Systems > Solaris, 26 June 2009
Hi,
I've used both in the past, and in my opinion the A5200 is the better of the two options. I think the A1000 went end-of-life a little before the A5200, too.

You can get 12 disks in an A1000, but 22 in an A5200 tray.

The A1000 requires a SCSI connection and the RAID Manager software, which went end-of-life at version 6.22.1 (I think), and finding a copy may be difficult. The A1000 also isn't a supported option with Solaris 10, whereas the A5200 is, so it at least has some life left in it with the current version of Solaris. Keep in mind that the A5200 has to be connected to the system via Fibre Channel.
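If you do track down RAID Manager, most of the day-to-day work on the A1000 is done with its command-line tools rather than the GUI. As a rough sketch (assuming a default RAID Manager 6.22 install, which puts its utilities in /usr/lib/osa/bin; the c1t0d0 device name below is just a placeholder):

    # List the RAID modules (arrays) visible to this host
    /usr/lib/osa/bin/lad

    # Run a health check across all attached arrays
    /usr/lib/osa/bin/healthck -a

    # Show inquiry/configuration details for one LUN
    /usr/lib/osa/bin/raidutil -c c1t0d0 -i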

The A5200 has a nice touch-screen display on the front for management tasks such as spinning disks up and down, which is easy to use, and I know Sun still uses a combination of E220s and A5200s for its ZFS training classes in the UK, but you didn't hear that from me ;-)
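You can do the same sort of thing from the host side with luxadm(1M), which is how Solaris drives the A5x00 enclosures. A rough sketch (the enclosure name and device path below are placeholders; use whatever luxadm probe reports on your system):

    # Find attached A5x00 (SENA) enclosures
    luxadm probe

    # Show an enclosure, its disks and their status
    # ("BOX1" is a placeholder enclosure name from luxadm probe)
    luxadm display BOX1

    # Spin a disk down and back up, same as from the front panel
    luxadm stop  /dev/rdsk/c2t20d0s2
    luxadm start /dev/rdsk/c2t20d0s2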

Best of luck and have fun learning with whichever option you choose.
 

stmsboot(1M)                                                      stmsboot(1M)

NAME
    stmsboot - administration program for the Solaris I/O multipathing
    feature

SYNOPSIS
    /usr/sbin/stmsboot [-d | -e | -u | -L | -l controller_number]

DESCRIPTION
    The Solaris I/O multipathing feature is a multipathing solution for
    storage devices that is part of the Solaris operating environment. This
    feature was formerly known as Sun StorEdge Traffic Manager (STMS) or
    MPxIO.

    The stmsboot program is an administrative command to manage enumeration
    of fibre channel devices under Solaris I/O multipathing. Solaris I/O
    multipathing-enabled devices are enumerated under scsi_vhci(7D),
    providing multipathing capabilities. Solaris I/O multipathing-disabled
    devices are enumerated under the physical controller.

    In the /dev and /devices trees, Solaris I/O multipathing-enabled devices
    receive new names that indicate that they are under Solaris I/O
    multipathing control. This means a device will have a different name
    from its original name (following installation) when it is under Solaris
    I/O multipathing control. The stmsboot command automatically updates
    /etc/vfstab and the dump configuration to reflect the device name
    changes when enabling or disabling Solaris I/O multipathing. A reboot is
    required for changes to take effect.

OPTIONS
    The following options are supported:

    -e                   Enables Solaris I/O multipathing on all fibre
                         channel (fp(7D)) controller ports. Following this
                         enabling, you are prompted to reboot. During the
                         reboot, vfstab and the dump configuration will be
                         updated to reflect the device name changes.

    -d                   Disables Solaris I/O multipathing on all fibre
                         channel (fp(7D)) controller ports. Following this
                         disabling, you are prompted to reboot. During the
                         reboot, vfstab and the dump configuration will be
                         updated to reflect the device name changes.

    -u                   Updates vfstab and the dump configuration after you
                         have manually modified the configuration to have
                         Solaris I/O multipathing enabled or disabled on
                         specific fp(7D) controller ports. This option
                         prompts you to reboot. During the reboot, vfstab
                         and the dump configuration will be updated to
                         reflect the device name changes.

    -L                   Display the device name changes from non-Solaris
                         I/O multipathing device names to Solaris I/O
                         multipathing device names.

    -l controller_number Display the device name changes from non-Solaris
                         I/O multipathing device names to Solaris I/O
                         multipathing device names for the specified
                         controller.

USAGE
    Along with its primary function of enabling or disabling Solaris I/O
    multipathing, the stmsboot command is used to update vfstab and the dump
    configuration to reflect device name changes. For a system to function
    properly, you must configure the applications that consume the devices
    by old names to use the new names.

    The -L and -l options display the mapping between the old and new device
    names. These options work after the changes made to the Solaris I/O
    multipathing configuration have taken effect. For example, you can use
    these options following the reboot after invoking stmsboot -e. The old
    device names must exist in order to display the mappings.

EXAMPLES
    Example 1: Enabling Solaris I/O Multipathing Following OS Upgrade

    To enable Solaris I/O multipathing on all fibre channel (fp(7D))
    controller ports, run:

        # stmsboot -e

    Example 2: Disabling Solaris I/O Multipathing

    To disable Solaris I/O multipathing on all fibre channel (fp(7D))
    controller ports, run:

        # stmsboot -d

    Example 3: Enabling Solaris I/O Multipathing on Selected Ports

    You want to enable Solaris I/O multipathing on some fibre channel
    controller ports and disable the feature on the rest. You edit the
    fp.conf file (see fp(7D)) to enable or disable Solaris I/O multipathing
    on specific controller ports. You then run the following command to have
    vfstab and the dump configuration updated to reflect the new device
    names:

        # stmsboot -u

ATTRIBUTES
    See attributes(5) for descriptions of the following attributes:

    +-----------------------------+-----------------------------+
    |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
    +-----------------------------+-----------------------------+
    |Architecture                 |SPARC                        |
    +-----------------------------+-----------------------------+
    |Availability                 |SUNWcsu, SUNWcslr            |
    +-----------------------------+-----------------------------+
    |Interface Stability          |Obsolete                     |
    +-----------------------------+-----------------------------+

SEE ALSO
    dumpadm(1M), ufsdump(1M), dumpdates(4), vfstab(4), fcp(7D), fctl(7D),
    fp(7D), qlc(7D), scsi_vhci(7D)

    Consult the Sun StorEdge Disk Tray [or Subsystem] Administrator's Guide
    for the T3, 3910, 3960, 6120, and 6320 storage subsystems.

    Sun StorEdge Traffic Manager Installation and Configuration Guide

NOTES
    Solaris I/O multipathing is not supported on all devices. After enabling
    Solaris I/O multipathing, only those devices that Solaris I/O
    multipathing supports are placed under Solaris I/O multipathing control.
    Non-supported devices remain as before.

    For Solaris releases prior to the current release, the -e and -d options
    remove the mpxio-disable property entries from the fp.conf file (see
    fp(7D)) and add a global mpxio-disable entry to fp.conf. The current
    release of the Solaris operating system does not support the
    mpxio-disable property. Solaris I/O multipathing is always enabled. If
    you want to disable multipathing, you must use the mechanisms provided
    by the HBA drivers. See fp(7D).

  Enabling Solaris I/O Multipathing on a Sun StorEdge Disk Array
    The following applies to Sun StorEdge T3, 3910, 3960, 6120, and 6320
    storage subsystems.

    To place your Sun StorEdge disk subsystem under Solaris I/O multipathing
    control, in addition to enabling Solaris I/O multipathing, the
    mp_support of the subsystem must be set to mpxio mode. The preferred
    sequence is to change the subsystem's mp_support to mpxio mode, then run
    stmsboot -e. If Solaris I/O multipathing is already enabled but the
    subsystem's mp_support is not in mpxio mode, then change the mp_support
    to mpxio mode and run stmsboot -u. Refer to the Sun StorEdge
    Administrator's Guide for your subsystem for more details.

  ufsdump Users
    The ufsdump command keeps records of the filesystem dumps in
    /etc/dumpdates (see dumpdates(4)). Among other items, the records
    contain device names. An effect of the "active" stmsboot options (-e,
    -d, and -u) is to change the device name of a storage device. The
    stmsboot command does not modify the dumpdates file. Because of this,
    the dumpdates records will refer to the old device names, that is, the
    device names that were in effect before you ran stmsboot. The effect of
    this device name-dumpdates disagreement is that, following use of
    stmsboot, ufsdump will be processed as if no previous dump had ever been
    made, thus dumping the entire filesystem (effectively, a level 0 dump).

  Procedure to Use stmsboot in Sun Cluster Environment
    If possible, use stmsboot -e before you start installing Sun Cluster
    software. After you run stmsboot, you install Sun Cluster software as
    you normally would. If you install Sun Cluster software before running
    stmsboot, you must use the following procedure.

    On each machine in the cluster on which you want to enable the Solaris
    multipathing feature, enter:

        # stmsboot -e

    ...and allow the system to reboot. When the system comes up, enter the
    following two commands:

        1. # /usr/cluster/bin/scdidadm -C
        2. # /usr/cluster/bin/scdidadm -r

    The preceding commands update did mappings with new device names while
    preserving did instance numbers for disks that are connected to multiple
    cluster nodes. did instance numbers of the local disks might not be
    preserved. For this reason, the did disk names for local disks might
    change.

        3. Update /etc/vfstab to reflect any new did disk names for your
           local disks.
        4. Reboot the system.

    To disable the Solaris multipathing feature, use stmsboot -d (instead of
    stmsboot -e), then follow the procedure above.

    To view mappings between the old and new device names, run stmsboot -L.
    To view did device name mappings, run /usr/cluster/bin/scdidadm -L.

                                  3 Mar 2005                     stmsboot(1M)
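Example 3 above says to edit fp.conf for per-port control but does not show the syntax. A minimal sketch of what that edit might look like, using the mpxio-disable property described in fp(7D); the parent device path is hardware-specific, and the one below is purely illustrative (check yours with prtconf -v):

    # /kernel/drv/fp.conf
    # Leave Solaris I/O multipathing enabled globally...
    mpxio-disable="no";

    # ...but disable it on one specific HBA port.
    # The parent path below is an illustrative placeholder.
    name="fp" parent="/pci@8,600000/SUNW,qlc@2" port=0 mpxio-disable="yes";

After saving the edit, run stmsboot -u and reboot, as described in Example 3.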