Full Discussion: Solaris in a SAN BOOT
Operating Systems > Solaris > Solaris in a SAN BOOT
Post 302337931 by incredible on Sunday 26th of July 2009 10:40:42 AM
I don't know about NetApp, but on a typical Sun storage array the steps are: create the LUN/volume, map it to the host, create a filesystem on the newly mapped volume/LUN, create a directory on the OS, and mount the new device on that mount point.
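For reference, on a Solaris host that sequence looks roughly like the commands below. This is only a sketch: the c2t0d0 device name and the /newvol directory are illustrative placeholders, not values from the original post, and the disk has to be labeled/partitioned in format before newfs will succeed.

   # cfgadm -al                          (confirm the fabric device/LUN is configured)
   # devfsadm                            (build the /dev links for the new LUN)
   # echo | format                       (identify the new cXtYdZ disk and label it)
   # newfs /dev/rdsk/c2t0d0s0            (create a UFS filesystem on the raw device)
   # mkdir /newvol
   # mount /dev/dsk/c2t0d0s0 /newvol     (mount the block device on the new directory)

Add a matching entry to /etc/vfstab if the filesystem should be mounted again after a reboot.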
 

9 More Discussions You Might Find Interesting

1. HP-UX

Need Help for configuring Boot from San

Hi all, I am trying to configure my HPUX host 11.31 IA64 to boot from a LUN for EMC clariion CX3-80 (Flaire PNR 26). I am following the below mentioned steps.. vi /tmp/idf 3 EFI 500MB HPUX 100% HPSP 400MB idisk -f /tmp/idf -w /dev/rdisk/diskxxx insf -e pvcreate -B... (0 Replies)
Discussion started by: barun agarwal
0 Replies

2. Red Hat

rhel-5.2-64bit, boot from SAN

hi all, I have a QLE 2460 HBA card in my server, and I have a LUN from a Promise array in RAID 0. I want to boot my OS (RHEL 5.2, 64-bit) from that LUN on a Dell T300 server. Can someone help me with this please, asap.. (0 Replies)
Discussion started by: navadeep
0 Replies

3. UNIX for Advanced & Expert Users

Can AIX 5.3 - 6 Boot From HP EVA 6000 SAN

Hi There, Has anyone had any luck with, or know how to get, AIX 5+ to boot from an HP EVA 6000 SAN? The servers used here will be P Class Blades. My initial searches on this so far did not bring results, so I am guessing this may not be possible on HP SANs, but please let me know if I am... (0 Replies)
Discussion started by: fazzasx
0 Replies

4. AIX

p595 LPAR no longer sees SAN boot disk

Hello, we have a weird and urgent problem with a few of our p595 LPARs running AIX 5.3. The LPARs ran AIX 5.3 TL 7 and booted off EMC SAN disks, using EMC PowerPath. On every boot we run "pprootdev on" and "pprootdev fix". We can issue "bosboot -a" and we can reboot the machines. Now, on two... (2 Replies)
Discussion started by: rwesterik
2 Replies

5. Solaris

SAN boot solaris 10

I have a Solaris 10 box which does not have internal disks; it has just a single dual-port HBA card. The storage team has assigned 2 LUNs to the system and I can see them from probe-scsi-all: /pci@3,700000/SUNW,emlxs@0,1 Device PortID 10100 WWPN 5006016941e0a08d LUN 0 Disk DGC ... (0 Replies)
Discussion started by: fugitive
0 Replies

6. Solaris

Identify Boot from SAN

How can I identify whether the server is BOOT FROM SAN? Also, how can one find from which device it booted? Thanks Rahul. Double post, continued here (0 Replies)
Discussion started by: rahul.kurumkar
0 Replies

7. Red Hat

Identify Boot from SAN

Hi, I have many servers, all of which boot from SAN. Can anybody let me know how to identify whether a server is Boot from SAN, and from which device? Thanks Rahul (1 Reply)
Discussion started by: rahul.kurumkar
1 Replies

8. AIX

AIX san boot from EMC2 & NPIV

Hi all, we're implementing a new NPIV infrastructure with AIX SAN boot on an EMC2 VMAX 5876. For economic reasons, they don't want to install the PowerPath multipathing software, only the ODM drivers from EMC2. Is this configuration supported, and can it work? In the past we implemented NPIV on the DS8000 family... (2 Replies)
Discussion started by: BabylonRocker76
2 Replies

9. Red Hat

Volume group not activated at boot after SAN migration

I have an IBM blade running RHEL 5.4 server, connected to two Hitachi SANs using common fibre cards & Brocade switches. It has two volume groups made from old SAN LUNs. The old SAN needs to be retired so we allocated LUNs from the new SAN, discovered the LUNs as multipath disks (4 paths) and grew... (4 Replies)
Discussion started by: rbatte1
4 Replies
FS_MKMOUNT(1)						       AFS Command Reference						     FS_MKMOUNT(1)

NAME
fs_mkmount - Creates a mount point for a volume

SYNOPSIS
fs mkmount -dir <directory> -vol <volume name> [-cell <cell name>] [-rw] [-fast] [-help]

fs mk -d <directory> -v <volume name> [-c <cell name>] [-r] [-f] [-h]

DESCRIPTION
The fs mkmount command creates a mount point for the volume named by the -vol argument at the location in the AFS file space specified by the -dir argument. The mount point looks like a standard directory element, and serves as the volume's root directory, but is actually a special file system object that refers to an AFS volume. When the Cache Manager first encounters a given mount point during pathname traversal, it contacts the VL Server to learn which file server machines house the indicated volume, then fetches a copy of the volume's root directory from the appropriate file server machine.

It is possible, although not recommended, to create more than one mount point to a volume. The Cache Manager can become confused if a volume is mounted in two places along the same path through the filespace.

The Cache Manager observes three basic rules as it traverses the AFS filespace and encounters mount points:

Rule 1: Access Backup and Read-only Volumes When Specified

When the Cache Manager encounters a mount point that specifies a volume with either a ".readonly" or a ".backup" extension, it accesses that type of volume only. If a mount point does not have either a ".backup" or ".readonly" extension, the Cache Manager uses Rules 2 and 3. For example, the Cache Manager never accesses the read/write version of a volume if the mount point names the backup version. If the specified version is inaccessible, the Cache Manager reports an error.

Rule 2: Follow the Read-only Path When Possible

If a mount point resides in a read-only volume and the volume that it references is replicated, the Cache Manager attempts to access a read-only copy of the volume; if the referenced volume is not replicated, the Cache Manager accesses the read/write copy. The Cache Manager is thus said to prefer a read-only path through the filespace, accessing read-only volumes when they are available.

The Cache Manager starts on the read-only path in the first place because it always accesses a read-only copy of the root.afs volume if it exists; the volume is mounted at the root of a cell's AFS filespace (named /afs by convention). That is, if the "root.afs" volume is replicated, the Cache Manager attempts to access a read-only copy of it rather than the read/write copy. This rule then keeps the Cache Manager on a read-only path as long as each successive volume is replicated. The implication is that both the "root.afs" and "root.cell" volumes must be replicated for the Cache Manager to access replicated volumes mounted below them in the AFS filespace. The volumes are conventionally mounted at the /afs and /afs/cellname directories, respectively.

Rule 3: Once on a Read/write Path, Stay There

If a mount point resides in a read/write volume and the volume name does not have a ".readonly" or a ".backup" extension, the Cache Manager attempts to access only the read/write version of the volume. The access attempt fails with an error if the read/write version is inaccessible, even if a read-only version is accessible. In this situation the Cache Manager is said to be on a read/write path and cannot switch back to the read-only path unless a mount point explicitly names a volume with a ".readonly" extension. (Cellular mount points are an important exception to this rule, as explained in the following discussion.)

There are three types of mount points, each appropriate for a different purpose because of the manner in which the Cache Manager interprets them.
o When the Cache Manager crosses a regular mount point, it obeys all three of the mount point traversal rules previously described. To create a regular mount point, include only the required -dir and -vol arguments to the fs mkmount command.

o When the Cache Manager crosses a read/write mount point, it attempts to access only the volume version named in the mount point. If the volume name is the base (read/write) form, without a ".readonly" or ".backup" extension, the Cache Manager accesses the read/write version of the volume, even if it is replicated. In other words, the Cache Manager disregards the second mount point traversal rule when crossing a read/write mount point: it switches to the read/write path through the filespace. To create a read/write mount point, include the -rw flag on the fs mkmount command. It is conventional to create only one read/write mount point in a cell's filespace, using it to mount the cell's "root.cell" volume just below the AFS filespace root (by convention, /afs/.cellname). See the OpenAFS Quick Start Guide for instructions and the chapter about volume management in the OpenAFS Administration Guide for further discussion. Creating a read/write mount point for a read-only or backup volume is acceptable, but unnecessary. The first rule of mount point traversal already specifies that the Cache Manager accesses them if the volume name in a regular mount point has a ".readonly" or ".backup" extension.

o When the Cache Manager crosses a cellular mount point, it accesses the indicated volume in the specified cell, which is normally a foreign cell. (If the mount point does not name a cell along with the volume, the Cache Manager accesses the volume in the cell where the mount point resides.) The Cache Manager disregards the third mount point traversal rule when crossing a regular cellular mount point: it accesses a read-only version of the volume if it is replicated, even if the volume that houses the mount point is read/write. Switching to the read-only path in this way is designed to avoid imposing undue load on the file server machines in foreign cells. To create a regular cellular mount point, include the -cell argument on the fs mkmount command. It is conventional to create cellular mount points only at the second level in a cell's filespace, using them to mount foreign cells' root.cell volumes just below the AFS filespace root (by convention, at /afs/foreign_cellname). The mount point enables local users to access the foreign cell's filespace, assuming they have the necessary permissions on the ACL of the volume's root directory and that there is an entry for the foreign cell in each local client machine's /etc/openafs/CellServDB file. In the output of the fs lsmount command, the cell name and a colon (":") appear between the initial number sign and the volume name in a regular cellular mount point name.

OPTIONS
-dir <directory>+
   Names the directory to create as a mount point. The directory must not already exist. Relative pathnames are interpreted with respect to the current working directory.
   Specify the read/write path to the directory, to avoid the failure that results from attempting to create a new mount point in a read-only volume. By convention, the read/write path is indicated by placing a period before the cell name at the pathname's second level (for example, /afs/.abc.com). For further discussion of the concept of read/write and read-only paths through the filespace, see DESCRIPTION.

-vol <volume name>
   Specifies the name or volume ID number of the volume to mount. If appropriate, add the ".readonly" or ".backup" extension to the name, or specify the appropriate volume ID number.

-cell <cell name>
   Names the cell in which the volume resides (creates a cellular mount point). Provide the fully qualified domain name, or a shortened form that disambiguates it from the other cells listed in the local /etc/openafs/CellServDB file.
   If this argument is omitted, no cell indicator appears in the mount point. When the Cache Manager interprets it, it assumes that the volume named in the mount point resides in the same cell as the volume that houses the mount point.

-rw
   Creates a read/write mount point. Omit this flag to create a regular mount point.

-fast
   Prevents the Volume Location (VL) Server from checking that the volume has a VLDB entry and printing a warning message if it does not. Whether or not this flag is included, the File Server creates the mount point even when the volume has no VLDB entry.

-help
   Prints the online help for this command. All other valid options are ignored.

EXAMPLES
The following command creates a regular mount point, mounting the volume "user.smith" at /afs/abc.com/usr/smith:

   % cd /afs/abc.com/usr
   % fs mkmount -dir smith -vol user.smith

The following commands create a read/write mount point and a regular mount point for the ABC Corporation cell's "root.cell" volume in that cell's file tree. The second command follows the convention of putting a period at the beginning of the read/write mount point's name.

   % fs mkmount -dir /afs/abc.com -vol root.cell
   % fs mkmount -dir /afs/.abc.com -vol root.cell -rw

The following command mounts the State University cell's "root.cell" volume in the ABC Corporation cell's file tree, creating a regular cellular mount point called /afs/stateu.edu. When an ABC Corporation Cache Manager encounters this mount point, it crosses into the State University cell on a read-only path.

   % fs mkmount -dir /afs/stateu.edu -vol root.cell -c stateu.edu
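As an additional illustration (not part of the original manual page), the fs lsmount command listed under SEE ALSO can verify what each of the mount points created above refers to. The exact output wording may differ between OpenAFS releases; the lines below are an approximate sketch:

   % fs lsmount /afs/abc.com/usr/smith
   '/afs/abc.com/usr/smith' is a mount point for volume '#user.smith'
   % fs lsmount /afs/.abc.com
   '/afs/.abc.com' is a mount point for volume '%root.cell'
   % fs lsmount /afs/stateu.edu
   '/afs/stateu.edu' is a mount point for volume '#stateu.edu:root.cell'

In this output a regular mount point begins with a number sign ("#"), a read/write mount point begins with a percent sign ("%"), and a cellular mount point includes the cell name and a colon before the volume name, as described in DESCRIPTION.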
PRIVILEGE REQUIRED
The issuer must have the "i" (insert) and "a" (administer) permissions on the ACL of the directory that is to house the mount point.

SEE ALSO
CellServDB(5), fs_lsmount(1), fs_rmmount(1)

COPYRIGHT
IBM Corporation 2000. <http://www.ibm.com/> All Rights Reserved.

This documentation is covered by the IBM Public License Version 1.0. It was converted from HTML to POD by software written by Chas Williams and Russ Allbery, based on work by Alf Wachsmann and Elizabeth Cassell.

OpenAFS							       2012-03-26						     FS_MKMOUNT(1)