09-06-2012
I guess it's hopeless then. I'll just have to wipe and re-use the disks. Thanks for all your help!
WIPE(1) LAM TOOLS WIPE(1)
NAME
       wipe - Shut down LAM.
SYNTAX
wipe [-bdhv] [-n <#>] [<bhost>]
OPTIONS
       -b     Assume the local and remote shells are the same; only one remote shell invocation is then used per node. Without -b, two
              remote shell invocations are used per node.
-d Turn on debugging mode. This implies -v.
-h Print the command help menu.
-v Be verbose.
-n <#> Wipe only the first <#> nodes.
DESCRIPTION
This command has been deprecated in favor of the lamhalt command. wipe should only be necessary if lamhalt fails and is unable to clean up
the LAM run-time environment properly. The wipe tool terminates the LAM software on each of the machines specified in the boot schema,
<bhost>. wipe is the topology tool that terminates LAM on the UNIX(tm) nodes of a multicomputer system. It invokes tkill(1) on each
machine. See tkill(1) for a description of how LAM is terminated on each node.
The <bhost> file is a LAM boot schema written in the host file syntax. CPU counts in the boot schema are ignored by wipe. See bhost(5).
Instead of the command line, a boot schema can be specified in the LAMBHOST environment variable. Otherwise a default file, bhost.def, is
used. LAM searches for <bhost> first in the local directory and then in the installation directory under etc/.
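       As an illustration, a boot schema in host-file syntax can be written and exported as below. The host names are hypothetical, and the
       cpu= counts are ignored by wipe, as noted above; this is only a sketch of the file format, not output from a real installation.

```shell
# Create a hypothetical boot schema, mynodes, in host-file
# syntax; wipe ignores the cpu= counts.
cat > mynodes <<'EOF'
node1.example.com cpu=2
node2.example.com cpu=4
node3.example.com
EOF

# Point LAM at the schema through the environment instead of
# the command line; "wipe -v" would then read $LAMBHOST.
export LAMBHOST=$PWD/mynodes
```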
       wipe does not quit if a particular remote node cannot be reached or if tkill(1) fails on any node. A message is printed if either of these
       failures occurs, in which case the user should investigate the cause of failure and, if necessary, terminate LAM by manually executing
       tkill(1) on the problem node(s). In extreme cases, the user may have to terminate individual LAM processes with kill(1).
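       When wipe reports unreachable nodes, the manual cleanup amounts to running tkill(1) on each problem node yourself, typically over a
       remote shell. A minimal sketch, assuming hypothetical host names and rsh as the remote shell; the commands are only printed here,
       since tkill exists only on a machine with LAM installed:

```shell
# Hypothetical list of nodes on which wipe reported a failure.
failed_nodes="node2.example.com node3.example.com"

# Build one cleanup command per node; print the commands rather
# than executing them, since rsh/tkill are only available on a
# LAM installation.
cmds=""
for node in $failed_nodes; do
    cmds="${cmds}rsh $node tkill
"
done
printf '%s' "$cmds"
```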
wipe will terminate after a limited number of nodes if the -n option is given. This is mainly intended for use by lamboot(1), which
invokes wipe when a boot does not successfully complete.
EXAMPLES
wipe -v mynodes
              Shut down LAM on the machines described in the boot schema mynodes, reporting important steps as they are performed.
FILES
$LAMHOME/etc/lam-bhost.def default boot schema file
SEE ALSO
recon(1), lamboot(1), tkill(1), bhost(5), lam-helpfile(5)
LAM 6.5.8 November, 2002 WIPE(1)