Full Discussion: RAID in HP-UX
Post 302841681 by rveri on Wednesday 7th of August 2013 07:05:53 PM
smazshah,

- What is the model of your server, and which OS version are you running?
$ model ; uname -a

- How are the disks connected: are they in the front bay, or are these SAN disks?

In your first output, c10t5d0 and c10t4d0 show CLAIMED, which is good; if the disks are not in the CLAIMED state, you cannot use them for this task.
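A quick sanity check (just a sketch; NO_HW and UNCLAIMED are the S/W states ioscan reports for missing or unclaimed devices):
# ioscan -fnC disk | grep -i -e NO_HW -e UNCLAIMED
No output from this grep is a good sign; anything it prints needs attention before you continue.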
- After you have presented the disks, make sure you have run:
# ioscan ; ioscan -fnC disk
# insf -e -C disk


- Then check their condition:
# ioscan -fnH 0/4/1/0.0.0.4.0 ; ioscan -fnH 0/4/1/0.0.0.5.0
# diskinfo /dev/rdsk/c10t5d0 ; diskinfo /dev/rdsk/c10t4d0


- Once the above looks good, you can go ahead with RAID-1 mirroring.
This is a software mirror, so you need the "MirrorDisk/UX" bundle installed, with a license.
If you have the MCOE (Mission Critical Operating Environment), MirrorDisk/UX comes bundled with the OS.
To check which OE you have: # swlist -l bundle | grep -i HPUX11
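To confirm MirrorDisk/UX itself is installed, something like this should do (a sketch; B2491BA is the usual MirrorDisk/UX product number on 11i, but verify against your media):
# swlist -l product | grep -i -e MirrorDisk -e B2491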


- If MirrorDisk/UX is in place, you can go ahead and mirror the disks as RAID-1.

A few steps outlined for creating RAID-1 filesystems on the two disks (a full command sketch follows this list):
1. pvcreate both disks: # pvcreate -f /dev/rdsk/c10t4d0 (and the same for c10t5d0).
2. Create a volume group, say vg01, with vgcreate, then add the 2nd disk with vgextend.
3. Create the filesystem you want redundant (RAID-1) with lvcreate and newfs.
4. Proceed with the mirror: lvextend -m 1 <lv_path> <2nd_disk>
5. Check with lvdisplay -v <lv_path>; each logical extent will map to 2 physical extents, one per disk, which means the LV is mirrored.

6. If one disk fails, the filesystem and its data created in step 3 will remain intact.
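Putting it together, here is a minimal end-to-end sketch of steps 1-5. It assumes your two disks c10t4d0/c10t5d0, a hypothetical 1 GB logical volume named lvdata with a VxFS filesystem, a hypothetical mount point /data, and that minor number 0x010000 is free for the new group file (check ls -l /dev/*/group and adjust everything to your system):

Step 1 - initialize both disks as LVM physical volumes:
# pvcreate -f /dev/rdsk/c10t4d0
# pvcreate -f /dev/rdsk/c10t5d0

Step 2 - create vg01 on the first disk, then extend it with the second:
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# vgcreate vg01 /dev/dsk/c10t4d0
# vgextend vg01 /dev/dsk/c10t5d0

Step 3 - create the LV and filesystem (1024 MB here, purely illustrative):
# lvcreate -L 1024 -n lvdata vg01
# newfs -F vxfs /dev/vg01/rlvdata

Step 4 - add the mirror copy on the second disk:
# lvextend -m 1 /dev/vg01/lvdata /dev/dsk/c10t5d0

Step 5 - verify, then mount:
# lvdisplay -v /dev/vg01/lvdata | more
# mkdir /data ; mount /dev/vg01/lvdata /data

In the lvdisplay output, "Mirror copies" should read 1 and the LE-to-PE map should show two PEs per LE, one on each disk. If a mirror half ever fails and is replaced, the standard LVM recovery flow is vgcfgrestore onto the new disk, vgchange -a y, then vgsync to re-silver the copies (see the respective man pages for your release).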
 
