02-02-2008
Quote:
Originally Posted by
itik
Is this going to cause data integrity problems, considering the SAN is much faster than the SSA? And my system is only AIX 4.3.3 with JFS only.
Use the "mirror write consistency" option if you want to be 101% sure, but no: there are no data integrity problems caused by unequal disk speeds.
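The mirroring and MWC setup described above can be sketched as follows. This is a minimal example, not the poster's exact procedure; the volume group name `datavg`, the SAN disk `hdisk20`, and the logical volume `somelv` are hypothetical placeholders:

```shell
# Add the new SAN disk to the existing volume group
# (hdisk20 is an example name for the SAN LUN)
extendvg datavg hdisk20

# Create a second copy of every logical volume in the VG on the new disk
mirrorvg datavg hdisk20

# Make sure mirror write consistency is active for a given LV
# (it is on by default on most AIX levels; somelv is a placeholder)
chlv -w y somelv

# Synchronize the stale partitions across both copies
syncvg -v datavg
```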
Quote:
When detaching the SSA disk, the command is reducevg hdisk14. Is that correct?
Almost: reducevg also needs the volume group name, i.e. "reducevg <vgname> hdisk14". Note that reducevg will refuse to remove the disk while logical partitions still reside on it, so remove the mirror copies first with unmirrorvg (or rmlvcopy). After that use "rmdev -dl <disk>" to remove the disk device prior to physically detaching it.
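Putting the detach sequence together, a minimal sketch (again assuming the hypothetical volume group name `datavg`):

```shell
# Remove the mirror copies that reside on the SSA disk
unmirrorvg datavg hdisk14

# Remove the now-empty disk from the volume group
reducevg datavg hdisk14

# Delete the device definition before physically pulling the disk
rmdev -dl hdisk14
```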
Quote:
And there's no conflict in mirroring any type of disk, whether RAID 0, 1 or 5?
No, there is not. When a disk is brought under LVM control it becomes a "physical volume", which is just raw storage space, regardless of where it comes from. This is the main difference between a "disk" and a "physical volume": it no longer matters what the physical volume is built from.
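You can see this uniformity with lspv: SSA disks, SAN LUNs and local SCSI disks all appear the same way once they are physical volumes. A sketch with made-up PVIDs and disk names:

```shell
# List all physical volumes known to the system
lspv
# hdisk14   000123456789abcd   datavg   active    (SSA-backed, example output)
# hdisk20   0001234567890000   datavg   active    (SAN LUN, example output)
```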
Hope this helps.
bakunin