New To Solaris
Posted by Andyp2704 on Friday 11th March 2011, 06:43 AM

Hi,

I have recently been told I need to look after a client's Solaris servers (8, 9 and 10) and wondered if I could ask the forum for some quick advice. All my previous admin work has been on HP-UX, and I have discovered that Solaris disk and file system management is massively different, so I hope you can help with a couple of questions.

One of my first tasks is to identify whether the servers have mirrored boot disks. On HP-UX I would simply run lvlnboot -v and it would tell me, but I cannot find a single Solaris command that does the same.

Question 1: Is there a single command I can run that will tell me whether the disks are mirrored?
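For what it is worth, my reading so far suggests that if the mirroring was done in software with Solaris Volume Manager (DiskSuite), something like the following should show it - though I have not been able to confirm this on the servers yet, so please correct me if I am on the wrong track:

# metadb          <- state database replicas should exist if SVM is in use
# metastat -p     <- one line per metadevice; lines like "d10 -m d11 d12 1" would be mirrors
# metastat        <- full status, including whether submirrors are Okay or need maintenance

and if Veritas Volume Manager is used instead, perhaps vxprint -ht. I am also not sure whether I need to check for hardware RAID on top of that (raidctl on Solaris 10?).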

The other thing that is confusing me is how the file systems are set up. I have been looking at the format command, as it is the only way I have found to list the disks and get an idea of whether they are internal or in an external array.
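For what it is worth, I have been running it non-interactively as

# echo | format

just to get the AVAILABLE DISK SELECTIONS list without having to quit out of the menu - I am assuming that is a sensible way to do it.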

While I was in format I also checked the partition tables. On the server I looked at, I ran format twice, each time choosing a different disk (both of which appear to be internal), and the print command showed the following:

DISK1:
partition> print
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part       Tag    Flag     Cylinders        Size            Blocks
  0       root     wm       1 -  1895       9.20GB    (1895/0/0)    19283520
  1       swap     wu    1897 -  2350       2.20GB    (454/0/0)      4619904
  2     backup     wm       0 - 14086      68.35GB    (14087/0/0)  143349312
  3 unassigned     wm       0               0         (0/0/0)              0
  4 unassigned     wm       0               0         (0/0/0)              0
  5 unassigned     wm       0               0         (0/0/0)              0
  6 unassigned     wm       0               0         (0/0/0)              0
  7 unassigned     wm    2352 -  2366      74.53MB    (15/0/0)        152640


DISK2:
partition> print
Current partition table (original):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part       Tag    Flag     Cylinders        Size            Blocks
  0       root     wm       1 -  6751       9.30GB    (6751/0/0)    19503639
  1       swap     wu    6752 -  8421       2.30GB    (1670/0/0)     4824630
  2     backup     wm       0 - 24619      33.92GB    (24620/0/0)   71127180
  3 unassigned     wm       0               0         (0/0/0)              0
  4 unassigned     wm       0               0         (0/0/0)              0
  5 unassigned     wm       0               0         (0/0/0)              0
  6 unassigned     wm       0               0         (0/0/0)              0
  7 unassigned     wm    8422 -  8475      76.17MB    (54/0/0)        156006


Question 2: Is there a single command that will tell me whether a disk is internal or in an external array, rather than working it out from format?
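The nearest I have got is iostat -En, which appears to print a vendor/product/serial summary for each disk, plus cfgadm -al (and luxadm probe for anything fibre-attached):

# iostat -En
# cfgadm -al

but I do not know whether reading the vendor/product strings is really the accepted way of telling an internal disk from one in an external array, so any pointers would be appreciated.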

Question 3: Looking at the output above, am I correct in assuming that the second disk is a mirror of the first? It just bothered me that the cylinder and block counts are so different for roughly the same size (9.2GB).
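My rough arithmetic, in case it helps anyone answer:

DISK1 slice 0: 19283520 blocks x 512 bytes = approx 9.2GB over 1895 cylinders
DISK2 slice 0: 19503639 blocks x 512 bytes = approx 9.3GB over 6751 cylinders

so the slices are almost the same size, but the two disks clearly have very different geometry (roughly 68GB vs 34GB overall, with far more, smaller cylinders on the second one). I am assuming that is why the counts look so different and that it does not in itself rule out one being a mirror of the other, but I would like that confirmed.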

In the meantime I am going to try to find some documentation for Solaris 8, 9 and 10 that explains how Solaris manages disks and file systems, but any help would be brilliant.

Thank you.
 
