RAID in HP-UX
Reply by rveri, Wednesday 7th of August 2013
smazshah,

- What is the model of your server, and which OS version is it running?
$ model ; uname -a
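For example, on an 11i v1 PA-RISC box these might return something like the following (hostname, model, and serial are purely illustrative):

$ model
9000/800/rp3440
$ uname -a
HP-UX myhost B.11.11 U 9000/800 1234567890 unlimited-user license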

- How have you connected the disks: are they in the front bay, or are these SAN disks?

In the first output, c10t5d0 and c10t4d0 show as CLAIMED, which is good; if the disks are not in the CLAIMED state, you cannot use them for this task.
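For reference, a healthy disk in the ioscan -fnC disk output looks roughly like this (instance numbers, driver, and product string here are illustrative):

# ioscan -fnC disk
Class     I  H/W Path         Driver  S/W State  H/W Type  Description
=======================================================================
disk      4  0/4/1/0.0.0.4.0  sdisk   CLAIMED    DEVICE    HP 300 GMBA3300NC
                              /dev/dsk/c10t4d0   /dev/rdsk/c10t4d0
disk      5  0/4/1/0.0.0.5.0  sdisk   CLAIMED    DEVICE    HP 300 GMBA3300NC
                              /dev/dsk/c10t5d0   /dev/rdsk/c10t5d0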
- After you have presented the disks, make sure you have run:
# ioscan ; ioscan -fnC disk
# insf -e -C disk


- Then check their condition:
# ioscan -fnH 0/4/1/0.0.0.4.0 ; ioscan -fnH 0/4/1/0.0.0.5.0
# diskinfo /dev/rdsk/c10t5d0 ; diskinfo /dev/rdsk/c10t4d0
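If the disks are healthy, diskinfo should report the vendor, product id, and a non-zero size; the values below are only an illustration:

# diskinfo /dev/rdsk/c10t4d0
SCSI describe of /dev/rdsk/c10t4d0:
             vendor: HP
         product id: MBA3300NC
               type: direct access
               size: 286102500 Kbytes
   bytes per sector: 512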


- Once the above looks good, you can go ahead with RAID-1 mirroring.
This is a software mirror, so you need the "MirrorDisk/UX" bundle installed, with a license.
If you have the MCOE (Mission Critical OE), it is bundled with the OS.
To check what type of OE you have, you can use: # swlist -l bundle | grep -i HPUX11
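You can also check for the MirrorDisk/UX product directly; B2491BA is its usual product number, and the revision shown here is illustrative:

# swlist -l product | grep -i -e B2491 -e mirror
  B2491BA   B.11.31   MirrorDisk/UX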


- If MirrorDisk/UX is in place, you can go ahead and mirror the disks (RAID-1).

A few steps outlined for getting RAID-1 filesystems on the two disks (a full command sketch follows the list):
1. pvcreate both disks: # pvcreate -f /dev/rdsk/c10t4d0
2. Create a volume group, say vg01, and add the 2nd disk as well, using vgcreate and vgextend.
3. Create the filesystem that you want redundant (RAID-1), using lvcreate and newfs.
4. Proceed with the mirror: use lvextend -m 1 <lv_path> <2nd_disk>
5. Check with lvdisplay -v <lv_path>; you will see two physical extents mapped to each logical extent in the LV, which means it is mirrored.
6. If one disk fails, the filesystem and the data created in step 3 will stay intact.
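
A minimal end-to-end sketch of steps 1 through 6, assuming the two disks are c10t4d0 and c10t5d0, the volume group is vg01, and a 1 GB VxFS logical volume; the group-file minor number (0x010000) and the sizes are illustrative, adjust them for your box:

# pvcreate -f /dev/rdsk/c10t4d0                      (step 1: initialize both disks)
# pvcreate -f /dev/rdsk/c10t5d0
# mkdir /dev/vg01                                    (step 2: group file, then the VG)
# mknod /dev/vg01/group c 64 0x010000
# vgcreate /dev/vg01 /dev/dsk/c10t4d0
# vgextend /dev/vg01 /dev/dsk/c10t5d0
# lvcreate -L 1024 -n lvol1 /dev/vg01                (step 3: 1024 MB logical volume)
# newfs -F vxfs /dev/vg01/rlvol1
# lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c10t5d0     (step 4: add the mirror copy)
# lvdisplay -v /dev/vg01/lvol1 | more                (step 5: verify both PVs per LE)

If one side of the mirror fails later, the usual recovery after replacing the disk is to vgcfgrestore the LVM headers onto the new disk, reactivate the VG, then vgsync the stale extents:

# vgcfgrestore -n /dev/vg01 /dev/rdsk/c10t5d0
# vgchange -a y /dev/vg01
# vgsync /dev/vg01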
 
