HP OpenView installation location

Post 302106904 by 197oo302 in Operating Systems / Solaris, 13 February 2007, 06:29 AM

I am running SunOS 5.8 (Solaris 8) on a Sun server. When the vendor came and installed everything, I was told that HP OpenView NNM was installed as well, but I cannot find any OpenView directory on the machine.

How can I tell whether OpenView is actually installed, and if it is, which directory it lives in?

Here is my current mount status (df -k output):

Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c1t0d0s0 10080200 4009105 5970293 41% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
swap 7199272 136 7199136 1% /var/run
/dev/dsk/c1t1d0s4 20645791 18806231 1633103 93% /oracle
/dev/dsk/c1t2d0s5 20645791 13964517 6474817 69% /work
swap 7199512 376 7199136 1% /tmp
/dev/dsk/c1t3d0s6 35009161 5135811 29523259 15% /oper1
/dev/dsk/c1t4d0s6 35009161 14362368 20296702 42% /oper2
/dev/did/dsk/d6s4 288509 4525 255134 2% /global/.devices/node@1
/dev/vx/dsk/utldg/oradata1
10321884 2462545 7756121 25% /oradata1
/dev/vx/dsk/utldg/oradata2
10321884 6311003 3907663 62% /oradata2
/dev/vx/dsk/utldg/oradata4
10321884 734629 9484037 8% /oradata3
/dev/vx/dsk/utldg/absdata1
51609487 5173737 45919656 11% /absdata1
/dev/vx/dsk/utldg/absdata2
51609487 41031377 10062016 81% /absdata2
/dev/vx/dsk/utldg/absdata3
40254451 5162473 34689434 13% /absdata3
/dev/vx/dsk/utldg/absdata4
51609487 33419515 17673878 66% /absdata4
/dev/vx/dsk/utldg/absdata5
51609487 51217 51042176 1% /absdata5
/dev/vx/dsk/utldg/absdata6
51609487 51217 51042176 1% /absdata6
/dev/vx/dsk/utldg/absdata7
51609487 4486385 46607008 9% /absdata7
/dev/vx/dsk/utldg/absindex1
30965686 20520817 10135213 67% /absindex1
/dev/vx/dsk/utldg/absindex2
30965686 30737 30625293 1% /absindex2
/dev/vx/dsk/utldg/absindex3
30965686 30737 30625293 1% /absindex3
/dev/vx/dsk/utldg/orabackup
10321884 2280316 7938350 23% /orabackup
/dev/vx/dsk/utldg/operdata1
51609487 25867426 25225967 51% /operdata1
/dev/vx/dsk/utldg/operdata2
51609487 26587367 24506026 53% /operdata2
/dev/vx/dsk/utldg/operdata3
51609487 15057215 36036178 30% /operdata3
/dev/did/dsk/d34s4 288509 4430 255229 2% /global/.devices/node@2
nepalabs1:/>
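
Nothing in that df output points to an OpenView filesystem, but NNM does not need its own mount point; it usually lives on the root filesystem. A quick check is to ask the Solaris package database and look at the directories OpenView uses by default. This is only a sketch: /opt/OV, /etc/opt/OV and /var/opt/OV are the usual NNM defaults and the process names in the comment are typical NNM daemons, but an installer can choose different paths.

    # Any OpenView packages registered on this host?
    pkginfo | grep -i openview

    # Default NNM binary, configuration, and data directories
    ls -d /opt/OV /etc/opt/OV /var/opt/OV 2>/dev/null

    # Any OpenView processes (e.g. ovspmd, netmon, ovwdb) running out of /opt/OV?
    ps -ef | grep '/opt/OV' | grep -v grep

    # If the binaries are there, ask NNM itself which services are up
    /opt/OV/bin/ovstatus -c

If pkginfo shows nothing and /opt/OV does not exist, NNM was most likely never installed on this server; it may be running on a separate management station that only polls this box.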
 

scdpm(1M)						  System Administration Commands						 scdpm(1M)

NAME
     scdpm - manage disk path monitoring daemon
SYNOPSIS
     scdpm [-a] {node | all}
     scdpm -f filename
     scdpm -m {[node | all][:/dev/did/rdsk/]dN | [:/dev/rdsk/]cNtXdY | all}
     scdpm -n {node | all}
     scdpm -p [-F] {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}
     scdpm -u {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}
DESCRIPTION
     Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software
     includes an object-oriented command set. Although Sun Cluster software
     still supports the original command set, Sun Cluster procedural
     documentation uses only the object-oriented command set. For more
     information about the object-oriented command set, see the Intro(1CL)
     man page.

     The scdpm command manages the disk path monitoring daemon in a cluster.
     You use this command to monitor and unmonitor disk paths. You can also
     use this command to display the status of disk paths or nodes. All of
     the accessible disk paths in the cluster or on a specific node are
     printed on the standard output. You must run this command on a cluster
     node that is online and in cluster mode.

     You can specify either a global disk name or a UNIX path name when you
     monitor a new disk path. Additionally, you can force the daemon to
     reread the entire disk configuration.

     You can use this command only in the global zone.
OPTIONS
     The following options are supported:

     -a
         Enables the automatic rebooting of a node when all monitored disk
         paths fail, provided that the following conditions are met:

         o  All monitored disk paths on the node fail.
         o  At least one of the disks is accessible from a different node in
            the cluster.

         You can use this option only in the global zone.

         Rebooting the node restarts all resource and device groups that are
         mastered on that node on another node. If all monitored disk paths
         on a node remain inaccessible after the node automatically reboots,
         the node does not automatically reboot again. However, if any
         monitored disk paths become available after the node reboots but
         then all monitored disk paths again fail, the node automatically
         reboots again.

         You need solaris.cluster.device.admin role-based access control
         (RBAC) authorization to use this option. See rbac(5).

     -F
         If you specify the -F option with the -p option, scdpm also prints
         the faulty disk paths in the cluster. The -p option prints the
         current status of a node or a specified disk path from all the
         nodes that are attached to the storage.

     -f filename
         Reads a list of disk paths to monitor or unmonitor in filename. You
         can use this option only in the global zone. The following example
         shows the contents of filename:

             u schost-1:/dev/did/rdsk/d5
             m schost-2:all

         Each line in the file must specify whether to monitor or unmonitor
         the disk path, the node name, and the disk path name. You specify
         the m option for monitor and the u option for unmonitor. You must
         insert a space between the command and the node name. You must also
         insert a colon (:) between the node name and the disk path name.

         You need solaris.cluster.device.admin RBAC authorization to use
         this option. See rbac(5).

     -m
         Monitors the new disk path that is specified by node:diskpath. You
         can use this option only in the global zone.

         You need solaris.cluster.device.admin RBAC authorization to use
         this option. See rbac(5).

     -n
         Disables the automatic rebooting of a node when all monitored disk
         paths fail. You can use this option only in the global zone. If all
         monitored disk paths on the node fail, the node is not rebooted.

         You need solaris.cluster.device.admin RBAC authorization to use
         this option. See rbac(5).

     -p
         Prints the current status of a node or a specified disk path from
         all the nodes that are attached to the storage. You can use this
         option only in the global zone. If you also specify the -F option,
         scdpm prints the faulty disk paths in the cluster.

         Valid status values for a disk path are Ok, Fail, Unmonitored, or
         Unknown. The valid status value for a node is
         Reboot_on_disk_failure. See the description of the -a and the -n
         options for more information about the Reboot_on_disk_failure
         status.

         You need solaris.cluster.device.read RBAC authorization to use this
         option. See rbac(5).

     -u
         Unmonitors a disk path. The daemon on each node stops monitoring
         the specified path. You can use this option only in the global
         zone.

         You need solaris.cluster.device.admin RBAC authorization to use
         this option. See rbac(5).
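As a worked sequence for the -f option described above, the sketch below builds a monitor/unmonitor list and hands it to scdpm in a single call. The node names schost-1 and schost-2 are the ones used in this man page's own examples; the file name /var/tmp/dpm.list is arbitrary.

    # Build a list: stop monitoring one DID device on schost-1 and
    # monitor every path visible from schost-2 (format as described for -f)
    printf 'u schost-1:/dev/did/rdsk/d5\nm schost-2:all\n' > /var/tmp/dpm.list

    # Apply the whole list in one invocation (run in the global zone)
    scdpm -f /var/tmp/dpm.list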
EXAMPLES
     Example 1  Monitoring All Disk Paths in the Cluster Infrastructure

     The following command forces the daemon to monitor all disk paths in
     the cluster infrastructure.

         # scdpm -m all

     Example 2  Monitoring a New Disk Path

     The following command monitors a new disk path. All nodes monitor
     /dev/did/dsk/d3 where this path is valid.

         # scdpm -m /dev/did/dsk/d3

     Example 3  Monitoring New Disk Paths on a Single Node

     The following command monitors new paths on a single node. The daemon
     on the schost-2 node monitors paths to the /dev/did/dsk/d4 and
     /dev/did/dsk/d5 disks.

         # scdpm -m schost-2:d4 -m schost-2:d5

     Example 4  Printing All Disk Paths and Their Status

     The following command prints all disk paths in the cluster and their
     status.

         # scdpm -p
         schost-1:reboot_on_disk_failure    enabled
         schost-2:reboot_on_disk_failure    disabled
         schost-1:/dev/did/dsk/d4           Ok
         schost-1:/dev/did/dsk/d3           Ok
         schost-2:/dev/did/dsk/d4           Fail
         schost-2:/dev/did/dsk/d3           Ok
         schost-2:/dev/did/dsk/d5           Unmonitored
         schost-2:/dev/did/dsk/d6           Ok

     Example 5  Printing All Failed Disk Paths

     The following command prints all of the failed disk paths on the
     schost-2 node.

         # scdpm -p -F all
         schost-2:/dev/did/dsk/d4           Fail

     Example 6  Printing the Status of All Disk Paths From a Single Node

     The following command prints the disk path and the status of all disks
     that are monitored on the schost-2 node.

         # scdpm -p schost-2:all
         schost-2:reboot_on_disk_failure    disabled
         schost-2:/dev/did/dsk/d4           Fail
         schost-2:/dev/did/dsk/d3           Ok
EXIT STATUS
     The following exit values are returned:

     0    The command completed successfully.
     1    The command failed completely.
     2    The command failed partially.

     Note - The disk path is represented by a node name and a disk name. The
     node name must be the host name or all. The disk name must be the
     global disk name, a UNIX path name, or all. The disk name can be either
     the full global path name or the disk name: /dev/did/dsk/d3 or d3. The
     disk name can also be the full UNIX path name: /dev/rdsk/c0t0d0s0.

     Disk path status changes are logged with the syslogd LOG_INFO facility
     level. All failures are logged with the LOG_ERR facility level.
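Because scdpm distinguishes complete from partial failure in its exit status, a cron job or wrapper script can branch on the three documented values. A minimal sketch; the logger tag dpm-check and the status file path are arbitrary choices.

    scdpm -p all > /var/tmp/dpm.status 2>&1
    case $? in
        0) : ;;                                                  # status collected for all paths
        1) logger -p user.err  -t dpm-check "scdpm failed completely" ;;
        2) logger -p user.warn -t dpm-check "scdpm failed partially" ;;
    esac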
ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     +-----------------------------+-----------------------------+
     |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
     +-----------------------------+-----------------------------+
     | Availability                | SUNWsczu                    |
     +-----------------------------+-----------------------------+
     | Stability                   | Evolving                    |
     +-----------------------------+-----------------------------+
SEE ALSO
     Intro(1CL), cldevice(1CL), clnode(1CL), attributes(5)

     Sun Cluster System Administration Guide for Solaris OS

Sun Cluster 3.2                   22 Jun 2006                        scdpm(1M)
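
Example 5 above shows that scdpm -p -F all prints one node:path line per faulty path. Assuming it prints nothing when every monitored path is healthy, a small Bourne shell wrapper can mail an alert whenever a path is in the Fail state; the recipient address is a placeholder.

    #!/bin/sh
    # Mail a report if the disk path monitoring daemon sees any failed path
    FAILED=`scdpm -p -F all 2>/dev/null`
    if [ -n "$FAILED" ]; then
        echo "$FAILED" | mailx -s "Failed disk paths on `hostname`" admin@example.com
    fi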