Operating Systems > Solaris
Move disks to different StorEdge, keeping RAID
Post 302345443 by Sun Fire on Wednesday, 19 August 2009, 08:59:42 AM
Quote:
Originally Posted by alexs77
Yep. To actually be able to use the disks for RAID in SE3310, the admin would need to "deinitialize" the disks first.



That sounds kind of awkward - suppose the array breaks (but the disks don't). Is the data lost in that case? That doesn't sound very professional to me... I mean, yes, of course you do backups. But I only do one backup per day, so if the array breaks shortly before the backup starts, a day of work is lost.

Hard to imagine that Sun really wants its users to go that route.



What XML file? The RAID is configured and set up from "inside" the SE3310 over the telnet interface.




I mean back up your data, so that if the RAID array gets initialized by mistake and you lose your data, you will be safe.

I meant back up only before you do this migration to another array. After the migration, you can continue with your normal backups.


As for the configuration, please refer to the sccli utility. It's a command-line utility that allows you to back up your array configuration.
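
For example (a minimal sketch only; the device path and file name below are placeholders, and the exact syntax should be verified against the sccli documentation for your firmware), the 3310's configuration can be captured from the host before the migration:

  # sccli /dev/rdsk/c1t0d0s2 show configuration > /var/tmp/se3310-config.txt

You can also run sccli interactively (type sccli, let it select the array, then issue show configuration) if you prefer to review the settings before saving them.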
 

10 More Discussions You Might Find Interesting

1. Solaris

Using Enterprise 4500 with StorEdge D1000 and StorEdge A5200

Sorry if this seems trivial. I have been given a task at work to configure a Sun processing server (an Enterprise 4500) so it can work with the storage arrays (a StorEdge A5200 and a StorEdge D1000). The truth is I have never worked on a Sun server before. I need a step-by-step guideline as to... (5 Replies)
Discussion started by: lawalidowu
5 Replies

2. Solaris

StorEdge D1000 sharing disks between two boxes

Hello, I have a D1000 connected to two servers, and I want both servers to see all the disks in the array. I.e., I have 6 disks, 3 on each side, and I want both servers to see all 6 disks. It appears to be set up in split-bus mode now; I've looked through the manuals and have become confused! So,... (2 Replies)
Discussion started by: BG_JrAdmin
2 Replies

3. Red Hat

IBM RAID disks

We have a Red Hat Linux server running on IBM x445 hardware. There are external disks in an IBM EXP300 disk enclosure. The system is running RAID 5. One of the four IBM disks (73.4 GB 10k FRU 06P5760) has become faulty. The system is still up and running OK because of the RAID. In that same EXP300... (3 Replies)
Discussion started by: pdudley
3 Replies

4. Solaris

Add new disk to Sun StorEdge 3310 RAID

Hi guys. Bit of a noob, so bear with me. I have 2 new disks I want to add to my StorEdge 3310 but am getting lost in the steps. We have another 3310 (JBOD) that I was able to plug the disks into and they instantly showed up. I did a few minor commands after (drvconfig, devfsadm, etc.) and I was... (4 Replies)
Discussion started by: jamie_collins
4 Replies

5. Solaris

Solaris not recognizing RAID 5 disks

I've just installed Solaris 10 Update 9 on a Sun 4140 server. I have a RAID 1 configuration (two 136 GB drives) for the OS and have created a RAID 5 array (six 136 GB drives). When I log into the system I am unable to see the RAID 5 disks at all. I've tried using the devfsadm command but no luck, and... (9 Replies)
Discussion started by: goose25
9 Replies

6. Linux

If I don't have RAID disks, can I shut down the dmraid device-mapper?

Hello, my newly installed CentOS system loads dmraid modules on startup. I removed all LVM/RAID things from the system installation menus, and after installation too, but dmraid is still there and it says: no raid disks found. I also did modprobe -r dm_raid45 and it does remove it, but only until... (7 Replies)
Discussion started by: tip78
7 Replies

7. AIX

SCSI PCI-X RAID controller card: RAID 5 AIX disks disappeared

Hello, I have a SCSI PCI-X RAID controller card on which I had created a disk array of 3 disks. When I typed lspv, I used to see 3 physical disks (two local disks and one RAID 5 disk). Suddenly the RAID 5 disk array disappeared, so the hardware engineer thought the problem was with the SCSI... (0 Replies)
Discussion started by: filosophizer
0 Replies

8. Solaris

Solaris 10 Installation - Disks missing, and RAID

Hey everyone. First, let me start by saying I'm primarily focused on Linux boxes and just happened to get pulled into building two T5220s. I'm not super educated on Sun boxes. Both T5220s have 8 x 146 GB 15k SAS drives. Inside the service processor, I can run SHOW /SYS/HDD{0-7} and they all come... (2 Replies)
Discussion started by: msarro
2 Replies

9. Solaris

Hardware RAID using three disks

Dear All, please find the output of the below commands:

# raidctl -l
Controller: 1
        Volume:c1t0d0
        Disk: 0.0.0
        Disk: 0.1.0
        Disk: 0.3.0
# raidctl -l c1t0d0
Volume  Size    Stripe  Status  Cache   RAID
        Sub     Size
... (10 Replies)
Discussion started by: jegaraman
10 Replies

10. Shell Programming and Scripting

Parallel move keeping folder structure along with files in it

The command below will move all the files in the directory dir to the destination using parallel and create a log; however, it will not keep them in their directories. I have tried mkdir -p, but that does not seem to work, or at least I cannot seem to get it right (it deletes other files when I use it). What is the... (2 Replies)
Discussion started by: cmccabe
2 Replies
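
Regarding the last discussion above (the parallel move): this is not the original poster's script, but one way to sketch it with GNU parallel, assuming the source directory is /path/to/dir and the destination is /path/to/dest (both placeholders), is to recreate each file's directory under the destination before moving the file:

  cd /path/to/dir
  find . -type f | parallel --joblog /tmp/move.log 'mkdir -p /path/to/dest/{//} && mv {} /path/to/dest/{}'

Here {} is the file path relative to the source and {//} is its directory part, so the tree is preserved, and --joblog records one line per file. Without parallel, rsync -a --remove-source-files /path/to/dir/ /path/to/dest/ performs the same structure-preserving move, leaving only the emptied source directories behind.
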
stmsboot(1M)

NAME
       stmsboot - administration program for the Solaris I/O multipathing feature

SYNOPSIS
       /usr/sbin/stmsboot [-d | -e | -u | -L | -l controller_number]

DESCRIPTION
       The Solaris I/O multipathing feature is a multipathing solution for storage
       devices that is part of the Solaris operating environment. This feature was
       formerly known as Sun StorEdge Traffic Manager (STMS) or MPxIO.

       The stmsboot program is an administrative command to manage enumeration of
       fibre channel devices under Solaris I/O multipathing. Solaris I/O
       multipathing-enabled devices are enumerated under scsi_vhci(7D), providing
       multipathing capabilities. Solaris I/O multipathing-disabled devices are
       enumerated under the physical controller.

       In the /dev and /devices trees, Solaris I/O multipathing-enabled devices
       receive new names that indicate that they are under Solaris I/O multipathing
       control. This means a device will have a different name from its original
       name (following installation) when it is under Solaris I/O multipathing
       control. The stmsboot command automatically updates /etc/vfstab and the dump
       configuration to reflect the device name changes when enabling or disabling
       Solaris I/O multipathing. A reboot is required for changes to take effect.

       The following options are supported:

       -e      Enables Solaris I/O multipathing on all fibre channel (fp(7D))
               controller ports. Following this enabling, you are prompted to
               reboot. During the reboot, vfstab and the dump configuration will be
               updated to reflect the device name changes.

       -d      Disables Solaris I/O multipathing on all fibre channel (fp(7D))
               controller ports. Following this disabling, you are prompted to
               reboot. During the reboot, vfstab and the dump configuration will be
               updated to reflect the device name changes.

       -u      Updates vfstab and the dump configuration after you have manually
               modified the configuration to have Solaris I/O multipathing enabled
               or disabled on specific fp(7D) controller ports. This option prompts
               you to reboot. During the reboot, vfstab and the dump configuration
               will be updated to reflect the device name changes.

       -L      Display the device name changes from non-Solaris I/O multipathing
               device names to Solaris I/O multipathing device names.

       -l controller_number
               Display the device name changes from non-Solaris I/O multipathing
               device names to Solaris I/O multipathing device names for the
               specified controller.

       Along with its primary function of enabling or disabling Solaris I/O
       multipathing, the stmsboot command is used to update vfstab and the dump
       configuration to reflect device name changes. For a system to function
       properly, you must configure the applications that consume the devices by
       old names to use the new names.

       The -L and -l options display the mapping between the old and new device
       names. These options work after the changes made to the Solaris I/O
       multipathing configuration have taken effect. For example, you can use these
       options following the reboot after invoking stmsboot -e. The old device
       names must exist in order to display the mappings.

EXAMPLES
       Example 1: Enabling Solaris I/O Multipathing Following OS Upgrade

       To enable Solaris I/O multipathing on all fibre channel (fp(7D)) controller
       ports, run:

         # stmsboot -e

       Example 2: Disabling Solaris I/O Multipathing

       To disable Solaris I/O multipathing on all fibre channel (fp(7D)) controller
       ports, run:

         # stmsboot -d

       Example 3: Enabling Solaris I/O Multipathing on Selected Ports

       You want to enable Solaris I/O multipathing on some fibre channel controller
       ports and disable the feature on the rest. You edit the fp.conf file (see
       fp(7D)) to enable or disable Solaris I/O multipathing on specific controller
       ports. You then run the following command to have vfstab and the dump
       configuration updated to reflect the new device names:

         # stmsboot -u

ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:

       +-----------------------------+-----------------------------+
       |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
       +-----------------------------+-----------------------------+
       |Architecture                 |SPARC                        |
       +-----------------------------+-----------------------------+
       |Availability                 |SUNWcsu, SUNWcslr            |
       +-----------------------------+-----------------------------+
       |Interface Stability          |Obsolete                     |
       +-----------------------------+-----------------------------+

SEE ALSO
       dumpadm(1M), ufsdump(1M), dumpdates(4), vfstab(4), fcp(7D), fctl(7D),
       fp(7D), qlc(7D), scsi_vhci(7D)

       Consult the Sun StorEdge Disk Tray [or Subsystem] Administrator's Guide for
       the T3, 3910, 3960, 6120, and 6320 storage subsystems.

       Sun StorEdge Traffic Manager Installation and Configuration Guide

NOTES
       Solaris I/O multipathing is not supported on all devices. After enabling
       Solaris I/O multipathing, only those devices that Solaris I/O multipathing
       supports are placed under Solaris I/O multipathing control. Non-supported
       devices remain as before.

       For Solaris releases prior to the current release, the -e and -d options
       remove the mpxio-disable property entries from the fp.conf file (see fp(7D))
       and add a global mpxio-disable entry to fp.conf. The current release of the
       Solaris operating system does not support the mpxio-disable property.
       Solaris I/O multipathing is always enabled. If you want to disable
       multipathing, you must use the mechanisms provided by the HBA drivers. See
       fp(7D).

   Enabling Solaris I/O Multipathing on a Sun StorEdge Disk Array
       The following applies to Sun StorEdge T3, 3910, 3960, 6120, and 6320 storage
       subsystems.

       To place your Sun StorEdge disk subsystem under Solaris I/O multipathing
       control, in addition to enabling Solaris I/O multipathing, the mp_support of
       the subsystem must be set to mpxio mode. The preferred sequence is to change
       the subsystem's mp_support to mpxio mode, then run stmsboot -e. If Solaris
       I/O multipathing is already enabled but the subsystem's mp_support is not in
       mpxio mode, then change the mp_support to mpxio mode and run stmsboot -u.
       Refer to the Sun StorEdge Administrator's Guide for your subsystem for more
       details.

   ufsdump Users
       The ufsdump command keeps records of the filesystem dumps in /etc/dumpdates
       (see dumpdates(4)). Among other items, the records contain device names. An
       effect of the "active" stmsboot options (-e, -d, and -u) is to change the
       device name of a storage device. The stmsboot command does not modify the
       dumpdates file. Because of this, the dumpdates records will refer to the old
       device names, that is, the device names that were in effect before you ran
       stmsboot. The effect of this device name-dumpdates disagreement is that,
       following use of stmsboot, ufsdump will be processed as if no previous dump
       had ever been made, thus dumping the entire filesystem (effectively, a level
       0 dump).

   Procedure to Use stmsboot in Sun Cluster Environment
       If possible, use stmsboot -e before you start installing Sun Cluster
       software. After you run stmsboot, you install Sun Cluster software as you
       normally would. If you install Sun Cluster software before running stmsboot,
       you must use the following procedure.

       On each machine in the cluster on which you want to enable the Solaris
       multipathing feature, enter:

         # stmsboot -e

       ...and allow the system to reboot. When the system comes up, enter the
       following two commands:

         1. # /usr/cluster/bin/scdidadm -C
         2. # /usr/cluster/bin/scdidadm -r

       The preceding commands update did mappings with new device names while
       preserving did instance numbers for disks that are connected to multiple
       cluster nodes. did instance numbers of the local disks might not be
       preserved. For this reason, the did disk names for local disks might change.

         3. Update /etc/vfstab to reflect any new did disk names for your local
            disks.
         4. Reboot the system.

       To disable the Solaris multipathing feature, use stmsboot -d (instead of
       stmsboot -e), then follow the procedure above.

       To view mappings between the old and new device names, run stmsboot -L. To
       view did device name mappings, run /usr/cluster/bin/scdidadm -L.

                                  3 Mar 2005                        stmsboot(1M)
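
To recap the Sun Cluster procedure above as one sequence (a sketch only; it assumes Sun Cluster is already installed, is run on each node, and that you allow the reboots stmsboot asks for):

  # stmsboot -e
        (answer yes to the reboot prompt; vfstab and the dump
         configuration are rewritten during the reboot)
  # /usr/cluster/bin/scdidadm -C
  # /usr/cluster/bin/scdidadm -r
        (update /etc/vfstab for any local-disk did names that
         changed, then reboot once more)
  # stmsboot -L
  # /usr/cluster/bin/scdidadm -L
        (verify the old-to-new device name and did mappings)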