Operating Systems / Solaris: Ldom OS on SAN based zfs volume
Post 302335489 by samar, Sunday 19 July 2009, 05:44:42 PM
OK then, fugitive,
You are saying that the device you mentioned is completely free and ready for use.
Try relabelling that disk, and check whether it is actually available at the system level.
Then try creating a filesystem on it, mounting it, and so on (a rough sketch of what I mean is below).
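Something along these lines, for example; here c1t2d0 is only a placeholder for whatever name your LUN actually shows up under:

# Placeholder device name: replace c1t2d0 with your own LUN.

# Check that the OS can see the disk at all:
echo | format | grep c1t2d0

# Relabel it interactively (pick the disk from the menu, then run "label"):
format -e

# Quick functional test with ZFS: create a throwaway pool and look at it.
zpool create -f testpool c1t2d0
zpool status testpool
df -h /testpool

# Or, for a UFS test on slice 0 instead:
# newfs /dev/rdsk/c1t2d0s0
# mount /dev/dsk/c1t2d0s0 /mnt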

One more thing: look at the OBP and see which devices are available there. Is the path of that disk visible from the OBP, i.e. at the ok prompt? (See the sketch below.)
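From the ok prompt it would look roughly like this (the exact device path will of course be different on your machine):

ok show-disks          \ list the disk device paths OpenBoot currently knows about
ok probe-scsi-all      \ probe all SCSI/FC controllers for attached LUNs
ok devalias            \ see whether an alias already points at that path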

Good luck.
 

10 More Discussions You Might Find Interesting

1. Linux

Howto clone/migrate a volume in the SAN

Dear Sirs, I have a Linux server (linux01) booting from SAN with a volume in a Nexsan SATAbeast storage array (san01). The disk/volume has four ext3 partitions; the total size is close to 400GB, but only 20-30GB are in use. I need to move this disk/volume to another Nexsan SATAbeast storage array... (0 Replies)
Discussion started by: Santi

2. Red Hat

how to mount SAN volume with its increased size

Hi, we have a 200GB SAN volume mounted on Red Hat EL 5, which is working fine. As my SAN supports dynamic resizing of volumes, I unmounted the volume and resized the SAN volume to 300GB successfully. Then I mounted it again, but it still shows only 200GB, although the data is intact. Now, my requirement is to let... (3 Replies)
Discussion started by: prvnrk

3. AIX

Volume Groups and the SAN

Hello all. I have a perplexing problem. I have an AIX 5.1 system on an EMC SAN. This system had been on a CX400 SAN for several years. The system was migrated to a CX700 just over a week ago. The migration consisted of utilizing one of the HBAs in the system and connecting to both SANs... (9 Replies)
Discussion started by: mhenryj

4. Solaris

Mount A ZFS volume

Is there any way I can mount a ZFS volume using a snapshot or some other means? (2 Replies)
Discussion started by: fugitive

5. Solaris

Grow / expand a ZFS volume

Hi, I need to expand a ZFS volume from 500GB to 800GB. I'd like to ask for your help in confirming the following procedure: can I do it on the fly without bothering the users working on this volume? Thank you in advance! (6 Replies)
Discussion started by: aixlover

6. Solaris

Installing Solaris OS on LDOM SAN Disk

I have viewed a few previous posts regarding this, but none of them quite described or worked for my issue. I am out of local disk space on my LDOM manager but still have plenty of SAN, vCPU and memory available, so I am trying to install a new LDOM OS on SAN. I have exposed the SAN to the... (0 Replies)
Discussion started by: MobileGSP

7. Solaris

ZFS LDOM problem on Solaris 10

Apologies if this is the wrong forum. I have some LDOMs running on a SPARC server. I copied the disk0 file from one chassis over to another, stopped the ldom on the source system and started it on the second one. All fine. Shut it down and flipped back. We then did a fair bit of work on the... (4 Replies)
Discussion started by: tommyq

8. Red Hat

Volume group not activated at boot after SAN migration

I have an IBM blade running RHEL 5.4 server, connected to two Hitachi SANs using common fibre cards & Brocade switches. It has two volume groups made from old SAN LUNs. The old SAN needs to be retired so we allocated LUNs from the new SAN, discovered the LUNs as multipath disks (4 paths) and grew... (4 Replies)
Discussion started by: rbatte1

9. Solaris

Solaris 11.3 - SAN mount QFS or ZFS

Hi all, I'm using Solaris 11.3, with an HBA port connected to a 3TB SAN disk.
AVAILABLE DISK SELECTIONS:
  0. c0t600A0B800033696A0000214B571938F1d0 <SUN-CSM200_R-0760 cyl 44556 alt 2 hd 255 sec 189>
     /scsi_vhci/ssd@g600a0b800033696a0000214b571938f1
  1. c2t3C58620E0C565100d0... (1 Reply)
Discussion started by: manhte1

10. Solaris

Exporting physical disk to ldom or ZFS volume

Generally, this is what we do: on the primary, export the 2 LUNs (add-vdsdev); on the primary, assign these disks to the ldom in question (add-vdisk); on the ldom, create a mirrored zpool from these two disks. On one (older) server we have: on the primary, create a mirrored zpool from the two LUNs.... (4 Replies) (See the command sketch after this list.)
Discussion started by: psychocandy
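For what it's worth, a minimal sketch of the steps described in discussion 10 above; the LUN paths, the volume/service names and the guest domain name ldom1 are all just placeholders:

# On the primary (control) domain: export two SAN LUNs as virtual disk backends.
ldm add-vdsdev /dev/dsk/c2t5000C500AAAA0001d0s2 lun0@primary-vds0
ldm add-vdsdev /dev/dsk/c2t5000C500BBBB0002d0s2 lun1@primary-vds0

# Still on the primary: hand both backends to the guest domain as virtual disks.
ldm add-vdisk vdisk0 lun0@primary-vds0 ldom1
ldm add-vdisk vdisk1 lun1@primary-vds0 ldom1

# Inside the guest domain: build a mirrored pool across the two virtual disks.
zpool create dpool mirror c0d1 c0d2
zpool status dpool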
KFS(4)							     Kernel Interfaces Manual							    KFS(4)

NAME
     kfs - disk file system

SYNOPSIS
     disk/kfs [ -rc ] [ -b n ] [ -f file ] [ -n name ] [ -s ]

DESCRIPTION
     Kfs is a local user-level file server for a Plan 9 terminal with a disk.
     It maintains a hierarchical Plan 9 file system on the disk and offers 9P
     (see intro(5)) access to it. Kfs begins by checking the file system for
     consistency, rebuilding the free list, and placing a file descriptor in
     /srv/name, where name is the service name (default kfs). If the file
     system is inconsistent, the user is asked for permission to ream (q.v.)
     the disk. The file system is not checked if it is reamed.

     The options are

     b n      If the file system is reamed, use n byte blocks. Larger blocks
              make the file system faster and less space efficient. 1024 and
              4096 are good choices. N must be a multiple of 512.

     c        Do not check the file system.

     f file   Use file as the disk. The default is /dev/sd0fs.

     n name   Use kfs.name as the name of the service.

     r        Ream the file system, erasing all of the old data and adding
              all blocks to the free list.

     s        Post file descriptor zero in /srv/service and read and write
              protocol messages on file descriptor one.

EXAMPLES
     Create a file system with service name kfs.local and mount it on /n/kfs.

         % kfs -rb4096 -nlocal
         % mount -c /srv/kfs.local /n/kfs

FILES
     /dev/sd0fs    Default file holding blocks.

SOURCE
     /sys/src/cmd/disk/kfs

SEE ALSO
     kfscmd(8), mkfs(8), prep(8), wren(3)

                                                                        KFS(4)