Operating Systems / AIX: /dev/hd9var /var - what should I do?
Post 302148867 by zaxxon on Tuesday, 4 December 2007, 01:25:31 AM
Sorry, didn't have a look in here for some time.
You can see whether your LVs are mirrored by running "lsvg -l <vgname>" against the volume group.

Example:

Code:
root@blabla:/usr/local/doc> lsvg -l rootvg
rootvg:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
hd5                 boot       1     2     2    closed/syncd  N/A
hd6                 paging     64    128   2    open/syncd    N/A
hd8                 jfs2log    1     2     2    open/syncd    N/A
hd4                 jfs2       2     4     2    open/syncd    /
hd2                 jfs2       25    50    2    open/syncd    /usr
hd9var              jfs2       4     8     2    open/syncd    /var
hd3                 jfs2       4     8     2    open/syncd    /tmp
hd1                 jfs2       2     4     2    open/syncd    /home
hd10opt             jfs2       4     8     2    open/syncd    /opt
lg_dumplv           sysdump    32    32    1    open/syncd    N/A
loglv00             jfslog     1     2     2    open/syncd    N/A
lv00                jfs        2     4     2    open/syncd    /var/adm/csd
lvrepos             jfs2       10    20    2    open/syncd    /repos
paging00            paging     64    128   2    open/syncd    N/A
lg_dumplv2          sysdump    32    32    1    open/syncd    N/A

When an LV is mirrored, it has at least twice as many PPs as LPs. In the example above you can see that all LVs are mirrored except the two sysdump devices lg_dumplv and lg_dumplv2, whose PP count equals their LP count.
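If you want to check a single LV directly, "lslv" reports the number of copies. A quick sketch, reusing hd9var from the example above (output trimmed; 2 copies = mirrored, 1 = not):

Code:
root@blabla:/usr/local/doc> lslv hd9var | grep COPIES
COPIES:                2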
Increasing a filesystem by blocks, megabytes or whatever always allocates whole PPs, rounding the request up to the next PP boundary. So if you are at the "border" of a PP and tell the system you need 200 MB more space in the FS while only 100 MB are left in the current PP, it will use those 100 MB up and grab the next full PP as well. With a PP size of 256 MB that means 100 MB from the current PP plus all 256 MB of the next one, so your FS grows by 356 MB instead of only 200 MB. You will see more LPs and PPs allocated, and if it is a mirrored LV, you will see even more PPs, since every new LP needs two of them.
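To see what one PP is on your box, ask lsvg; a sketch reusing rootvg from above (the sample output line is illustrative, the PP SIZE field shares its line with other lsvg output):

Code:
root@blabla:/usr/local/doc> lsvg rootvg | grep "PP SIZE"
VG STATE:           active                   PP SIZE:        256 megabyte(s)

With that value the arithmetic above reads: 100 MB left in the current PP + 1 new PP of 256 MB = 356 MB growth for a 200 MB request.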

But this is nothing to worry about - this is normal.
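As for the original question (a full /var): if the VG still has free PPs, just grow the filesystem. A sketch, not verified on your box - the "+200M" syntax needs AIX 5.3 or newer, on older releases you pass the size in 512-byte blocks (+409600 for 200 MB):

Code:
root@blabla:/usr/local/doc> lsvg rootvg | grep "FREE PPs"   # make sure the VG has room left
root@blabla:/usr/local/doc> chfs -a size=+200M /var         # grow /var by ~200 MB (rounded up to whole PPs)
root@blabla:/usr/local/doc> df -m /var                      # verify the new size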
 

scdpm(1M)						  System Administration Commands						 scdpm(1M)

NAME
       scdpm - manage disk path monitoring daemon

SYNOPSIS
       scdpm [-a] {node | all}

       scdpm -f filename

       scdpm -m {[node | all][:/dev/did/rdsk/]dN | [:/dev/rdsk/]cNtXdY | all}

       scdpm -n {node | all}

       scdpm -p [-F] {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}

       scdpm -u {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}

DESCRIPTION
       Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun
       Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented
       command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

       The scdpm command manages the disk path monitoring daemon in a cluster. You use this command to monitor and unmonitor disk
       paths. You can also use this command to display the status of disk paths or nodes. All of the accessible disk paths in the
       cluster or on a specific node are printed on the standard output. You must run this command on a cluster node that is online
       and in cluster mode.

       You can specify either a global disk name or a UNIX path name when you monitor a new disk path. Additionally, you can force
       the daemon to reread the entire disk configuration.

       You can use this command only in the global zone.

OPTIONS
       The following options are supported:

       -a
             Enables the automatic rebooting of a node when all monitored disk paths fail, provided that the following conditions
             are met:

             o  All monitored disk paths on the node fail.

             o  At least one of the disks is accessible from a different node in the cluster.

             You can use this option only in the global zone.

             Rebooting the node restarts all resource and device groups that are mastered on that node on another node.

             If all monitored disk paths on a node remain inaccessible after the node automatically reboots, the node does not
             automatically reboot again. However, if any monitored disk paths become available after the node reboots but then all
             monitored disk paths again fail, the node automatically reboots again.

             You need solaris.cluster.device.admin role-based access control (RBAC) authorization to use this option. See rbac(5).

       -F
             If you specify the -F option with the -p option, scdpm also prints the faulty disk paths in the cluster. The -p option
             prints the current status of a node or a specified disk path from all the nodes that are attached to the storage.

       -f filename
             Reads a list of disk paths to monitor or unmonitor in filename.

             You can use this option only in the global zone.

             The following example shows the contents of filename.

                   u schost-1:/dev/did/rdsk/d5
                   m schost-2:all

             Each line in the file must specify whether to monitor or unmonitor the disk path, the node name, and the disk path
             name. You specify the m option for monitor and the u option for unmonitor. You must insert a space between the command
             and the node name. You must also insert a colon (:) between the node name and the disk path name.

             You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

       -m
             Monitors the new disk path that is specified by node:diskpath.

             You can use this option only in the global zone.

             You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

       -n
             Disables the automatic rebooting of a node when all monitored disk paths fail.

             You can use this option only in the global zone.

             If all monitored disk paths on the node fail, the node is not rebooted.

             You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

       -p
             Prints the current status of a node or a specified disk path from all the nodes that are attached to the storage.

             You can use this option only in the global zone.

             If you also specify the -F option, scdpm prints the faulty disk paths in the cluster.

             Valid status values for a disk path are Ok, Fail, Unmonitored, or Unknown.

             The valid status value for a node is Reboot_on_disk_failure. See the description of the -a and the -n options for more
             information about the Reboot_on_disk_failure status.

             You need solaris.cluster.device.read RBAC authorization to use this option. See rbac(5).

       -u
             Unmonitors a disk path. The daemon on each node stops monitoring the specified path.

             You can use this option only in the global zone.

             You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

EXAMPLES
       Example 1 Monitoring All Disk Paths in the Cluster Infrastructure

       The following command forces the daemon to monitor all disk paths in the cluster infrastructure.

             # scdpm -m all

       Example 2 Monitoring a New Disk Path

       The following command monitors a new disk path. All nodes monitor /dev/did/dsk/d3 where this path is valid.

             # scdpm -m /dev/did/dsk/d3

       Example 3 Monitoring New Disk Paths on a Single Node

       The following command monitors new paths on a single node. The daemon on the schost-2 node monitors paths to the
       /dev/did/dsk/d4 and /dev/did/dsk/d5 disks.

             # scdpm -m schost-2:d4 -m schost-2:d5

       Example 4 Printing All Disk Paths and Their Status

       The following command prints all disk paths in the cluster and their status.

             # scdpm -p
             schost-1:reboot_on_disk_failure    enabled
             schost-2:reboot_on_disk_failure    disabled
             schost-1:/dev/did/dsk/d4           Ok
             schost-1:/dev/did/dsk/d3           Ok
             schost-2:/dev/did/dsk/d4           Fail
             schost-2:/dev/did/dsk/d3           Ok
             schost-2:/dev/did/dsk/d5           Unmonitored
             schost-2:/dev/did/dsk/d6           Ok

       Example 5 Printing All Failed Disk Paths

       The following command prints all of the failed disk paths on the schost-2 node.

             # scdpm -p -F all
             schost-2:/dev/did/dsk/d4           Fail

       Example 6 Printing the Status of All Disk Paths From a Single Node

       The following command prints the disk path and the status of all disks that are monitored on the schost-2 node.

             # scdpm -p schost-2:all
             schost-2:reboot_on_disk_failure    disabled
             schost-2:/dev/did/dsk/d4           Fail
             schost-2:/dev/did/dsk/d3           Ok

EXIT STATUS
       The following exit values are returned:

       0     The command completed successfully.

       1     The command failed completely.

       2     The command failed partially.

       Note - The disk path is represented by a node name and a disk name. The node name must be the host name or all. The disk name
       must be the global disk name, a UNIX path name, or all. The disk name can be either the full global path name or the disk
       name: /dev/did/dsk/d3 or d3. The disk name can also be the full UNIX path name: /dev/rdsk/c0t0d0s0.

       Disk path status changes are logged with the syslogd LOG_INFO facility level. All failures are logged with the LOG_ERR
       facility level.

ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:

       +-----------------------------+-----------------------------+
       |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
       +-----------------------------+-----------------------------+
       |Availability                 |SUNWsczu                     |
       +-----------------------------+-----------------------------+
       |Stability                    |Evolving                     |
       +-----------------------------+-----------------------------+

SEE ALSO
       Intro(1CL), cldevice(1CL), clnode(1CL), attributes(5)

       Sun Cluster System Administration Guide for Solaris OS

Sun Cluster 3.2                                                  22 Jun 2006                                                       scdpm(1M)