Post 302625691 by fretagi on Wednesday, 18 April 2012, 06:22 AM
Convert from raw disk to solaris volume manager disk

I have a Solaris 10 system that uses NetApp as its storage; the file systems are already configured, as you can see from the example below:
Code:
root@moneta # df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d0         9.8G   513M   9.3G     6%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    21G   1.7M    21G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/dev/md/dsk/d6         9.8G   4.0G   5.7G    42%    /usr
fd                       0K     0K     0K     0%    /dev/fd
/dev/md/dsk/d1         9.8G   3.5G   6.2G    36%    /var
swap                    21G   208K    21G     1%    /tmp
swap                    21G    96K    21G     1%    /var/run
/dev/dsk/c4t60A98000646F6172636F677178564852d0s0
                       197G   100G    95G    52%    /ora_moneta_oraarch
/dev/md/dsk/d30        550G    52G   493G    10%    /local_backup
/dev/dsk/c4t60A98000646F6172636F677231526977d0s0
                        43G   753M    42G     2%    /moneta_polled01
/dev/dsk/c4t60A98000646F6172636F677234564655d0s0
                        64G    13G    51G    21%    /moneta_parsed01
/dev/dsk/c4t60A98000646F6172636F67724B4C2D6Dd0s6
                       689G   548G   134G    81%    /moneta_collected02
/dev/dsk/c4t60A98000646F6172636F677231507347d0s0
                        64G    53G    10G    84%    /moneta_temp01
/dev/md/dsk/d5         9.8G    69M   9.7G     1%    /opt
/dev/dsk/c4t60A98000646F6172636F677253506852d0s0
                       584G   513G    65G    89%    /ora_data01
/dev/md/dsk/d4         192G   151G    39G    80%    /internaldisk1
/dev/dsk/c4t60A98000646F6172636F67717542506Ad0s0
                       5.9G   2.4G   3.5G    41%    /moneta_home
/dev/dsk/c4t60A98000646F6172636F67724D6B7548d0s6
                       591G   462G   122G    80%    /moneta_collected03
/dev/dsk/c4t60A98000646F6172636F677255764435d0s0
                       583G   505G    72G    88%    /ora_data04
/dev/dsk/c4t60A98000646F6172636F677176744B35d0s0
                        20G   9.8G   9.7G    51%    /oracle
/dev/dsk/c4t60A98000646F6172636F6772557A4931d0s0
                       584G   522G    56G    91%    /ora_data03
/dev/dsk/c4t60A98000646F6172636F6772554A706Dd0s0
                       584G   521G    57G    91%    /ora_data02

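The metadevices that already exist on this box (d0, d1, d4, d5, d6 and d30 in the listing above) and the state database replicas can be checked with the usual SVM commands, for example:
Code:
# compact, one-line-per-metadevice view of the current layout
metastat -p

# confirm the state database replicas are in place and healthy
metadb -i
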
As you can see from the df output, some of these file systems have exceeded the 90% mark, so we want to extend them. From my research, I need to convert the
Code:
/dev/dsk/c4t60A98000646F6172636F6772554A706Dd0s0

into “/dev/md/dsk/d60”, for example, which is Solaris Volume Manager nomenclature. Then I can use growfs -M to grow the file system.
But first I need to convert those disks into Solaris Volume Manager metadevices. Please can you give me the steps to do that?
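So far the rough sequence I have pieced together looks like this. It is untested: d60 is just an example metadevice name, /ora_data02 is one of the file systems from the listing above, and I am assuming the state database replicas are already in place (d0, d1, etc. are already in use on this box):
Code:
# stop the application and unmount the file system
umount /ora_data02

# wrap the existing slice in a one-way concat/stripe metadevice
# (-f forces metainit even though the slice already carries a file system)
metainit -f d60 1 1 c4t60A98000646F6172636F6772554A706Dd0s0

# edit the /etc/vfstab entry for /ora_data02 to use
# /dev/md/dsk/d60 and /dev/md/rdsk/d60, then remount
mount /ora_data02

# later, once extra space is available (a new slice, or the grown LUN),
# attach it to the concat and grow the mounted file system
# (<new-slice> is a placeholder for the real cXtYdZsN device)
metattach d60 <new-slice>
growfs -M /ora_data02 /dev/md/rdsk/d60

Is that roughly the right approach, and does the data on the slice stay intact when it is wrapped in the metadevice?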
FR
 

10 More Discussions You Might Find Interesting

1. Solaris

Disk Mirror in Solaris 9 via Solaris Volume Manager

Hello, I am trying to do mirror in solaris 9. I have total 0-7 disks 4 5 6 7 0 1 2 3 Drive 0 and Drive 4 = Boot Drives Need to Mirror following drives. Drive 1 and Drive 5 = Need to mirror Drive 1 was mounted on: /prod1, /prod2, /prod3, /prod4, /prod5. Then i... (3 Replies)
Discussion started by: deal732
3 Replies

2. Filesystems, Disks and Memory

Volume Manager; importing a disk

System: Alpha with Tru64 5.1b Disk under LSM (Logical Storage Manager; essentially v2 of Veritas VxVM) control was generating disk errors. The disk was timing out a lot and generating a few disk errors. DBAs couldn't keep the oracle instance up on that node of the cluster. I contacted HP and got... (1 Reply)
Discussion started by: BOFH
1 Replies

3. Solaris

Reading raw disk on Solaris

Hello I wonder if someone could help me in reading a raw (non-Solaris) disk on a Solaris system... I have an IDE HDD in my Sun Blade and would like to read it (using C). It appears on the system and with the format command shows up as c0t1d0. I use the dd command to read the disk as such:... (19 Replies)
Discussion started by: son_t
19 Replies

4. Solaris

How to create new partitions in solaris,from the raw disk?

Hi all, I would like to know how to make new partitions.... I currently have allocated 60G for various slices (I have totally used 4 out of 7 available slices... I am running only solaris on my box. My plan is to have entire disk dedicated to solaris and run other OS from within... (19 Replies)
Discussion started by: wrapster
19 Replies

5. UNIX for Advanced & Expert Users

Veritas Volume Manager question (Disk layout with 4 plexes)

I am trying to build a veritas volume similar to an existing volume on another server. The output on source server is: usbtor12# vxprint -hrtg appdg v anvil_sqlVOL - ENABLED ACTIVE 629145600 SELECT - fsgen pl anvil_sqlVOL-01 anvil_sqlVOL ENABLED ACTIVE 629145600... (3 Replies)
Discussion started by: momin313
3 Replies

6. Solaris

Veritas Volume Manager: disk "failed was"

Hello there, I'm going to describe a situation I've got here... feel free to ask away questions and I'll provide what I can if it'll help me get this answered! When I do a vxdisk list, I see a disk that VxVM calls "disk4" that is listed as "failed was: c1t9d0s2". When I do a format, I can go... (3 Replies)
Discussion started by: kitykity
3 Replies

7. UNIX for Dummies Questions & Answers

VERITAS Volume Manager - mirror a disk/volume

I have a machine (5.10 Generic_142900-03 sun4u sparc SUNW,Sun-Fire-V210) that we are upgrading the storage and my task is to mirror what is already on the machine to the new disk. I have the disk, it is labeled and ready but I am not sure of the next steps to mirror the existing diskgroup and... (1 Reply)
Discussion started by: rookieuxixsa
1 Replies

8. Solaris

root disk mirroring in solaris volume manager for solaris 10

Need a procedure document to do "root disk mirroring in solaris volume manager for solaris 10". I hope someone will help me asap. I need to do it in a production environment. Let me know if you need any details on this. Thanks, Rama (1 Reply)
Discussion started by: ramareddi16
1 Replies

9. AIX

Regarding AIX volume manager & replacing a disk

First a little background: I'm working with an AIX 6.1 TL05 running two mirrored SAS disks (rootvg) and four SSDs (appvg) All four SSDs belong to appvg and are setup to mirror as follows: hdisk4 --> hdisk6 (containing application fs) hdisk5 --> hdisk7 (containing database fs) A few days... (1 Reply)
Discussion started by: Michael Mullig
1 Replies

10. Solaris

Convert from raw disk to solaris volume manager disk

I have a solaris 10 system configured using NetApp as its storage, and the file systems are already configured as you can see from the example below: root@moneta # df -h Filesystem size used avail capacity Mounted on /dev/md/dsk/d0 9.8G 513M 9.3G 6% / ... (0 Replies)
Discussion started by: fretagi
0 Replies
scdpm(1M)						  System Administration Commands						 scdpm(1M)

NAME
       scdpm - manage disk path monitoring daemon

SYNOPSIS
       scdpm [-a] {node | all}

       scdpm -f filename

       scdpm -m {[node | all][:/dev/did/rdsk/]dN | [:/dev/rdsk/]cNtXdY | all}

       scdpm -n {node | all}

       scdpm -p [-F] {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}

       scdpm -u {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}

DESCRIPTION
       Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster
       software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For
       more information about the object-oriented command set, see the Intro(1CL) man page.

       The scdpm command manages the disk path monitoring daemon in a cluster. You use this command to monitor and unmonitor disk paths. You
       can also use this command to display the status of disk paths or nodes. All of the accessible disk paths in the cluster or on a
       specific node are printed on the standard output. You must run this command on a cluster node that is online and in cluster mode.

       You can specify either a global disk name or a UNIX path name when you monitor a new disk path. Additionally, you can force the daemon
       to reread the entire disk configuration.

       You can use this command only in the global zone.

OPTIONS
       The following options are supported:

       -a
              Enables the automatic rebooting of a node when all monitored disk paths fail, provided that the following conditions are met:

              o    All monitored disk paths on the node fail.

              o    At least one of the disks is accessible from a different node in the cluster.

              You can use this option only in the global zone.

              Rebooting the node restarts all resource and device groups that are mastered on that node on another node.

              If all monitored disk paths on a node remain inaccessible after the node automatically reboots, the node does not automatically
              reboot again. However, if any monitored disk paths become available after the node reboots but then all monitored disk paths
              again fail, the node automatically reboots again.

              You need solaris.cluster.device.admin role-based access control (RBAC) authorization to use this option. See rbac(5).

       -F
              If you specify the -F option with the -p option, scdpm also prints the faulty disk paths in the cluster. The -p option prints
              the current status of a node or a specified disk path from all the nodes that are attached to the storage.

       -f filename
              Reads a list of disk paths to monitor or unmonitor in filename.

              You can use this option only in the global zone.

              The following example shows the contents of filename.

                     u schost-1:/dev/did/rdsk/d5
                     m schost-2:all

              Each line in the file must specify whether to monitor or unmonitor the disk path, the node name, and the disk path name. You
              specify the m option for monitor and the u option for unmonitor. You must insert a space between the command and the node name.
              You must also insert a colon (:) between the node name and the disk path name.

              You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

       -m
              Monitors the new disk path that is specified by node:diskpath.

              You can use this option only in the global zone.

              You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

       -n
              Disables the automatic rebooting of a node when all monitored disk paths fail.

              You can use this option only in the global zone.

              If all monitored disk paths on the node fail, the node is not rebooted.

              You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

       -p
              Prints the current status of a node or a specified disk path from all the nodes that are attached to the storage.

              You can use this option only in the global zone.

              If you also specify the -F option, scdpm prints the faulty disk paths in the cluster.

              Valid status values for a disk path are Ok, Fail, Unmonitored, or Unknown. The valid status value for a node is
              Reboot_on_disk_failure. See the description of the -a and the -n options for more information about the Reboot_on_disk_failure
              status.

              You need solaris.cluster.device.read RBAC authorization to use this option. See rbac(5).

       -u
              Unmonitors a disk path. The daemon on each node stops monitoring the specified path.

              You can use this option only in the global zone.

              You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

EXAMPLES
       Example 1 Monitoring All Disk Paths in the Cluster Infrastructure

       The following command forces the daemon to monitor all disk paths in the cluster infrastructure.

              # scdpm -m all

       Example 2 Monitoring a New Disk Path

       The following command monitors a new disk path. All nodes monitor /dev/did/dsk/d3 where this path is valid.

              # scdpm -m /dev/did/dsk/d3

       Example 3 Monitoring New Disk Paths on a Single Node

       The following command monitors new paths on a single node. The daemon on the schost-2 node monitors paths to the /dev/did/dsk/d4 and
       /dev/did/dsk/d5 disks.

              # scdpm -m schost-2:d4 -m schost-2:d5

       Example 4 Printing All Disk Paths and Their Status

       The following command prints all disk paths in the cluster and their status.

              # scdpm -p
              schost-1:reboot_on_disk_failure       enabled
              schost-2:reboot_on_disk_failure       disabled
              schost-1:/dev/did/dsk/d4              Ok
              schost-1:/dev/did/dsk/d3              Ok
              schost-2:/dev/did/dsk/d4              Fail
              schost-2:/dev/did/dsk/d3              Ok
              schost-2:/dev/did/dsk/d5              Unmonitored
              schost-2:/dev/did/dsk/d6              Ok

       Example 5 Printing All Failed Disk Paths

       The following command prints all of the failed disk paths on the schost-2 node.

              # scdpm -p -F all
              schost-2:/dev/did/dsk/d4              Fail

       Example 6 Printing the Status of All Disk Paths From a Single Node

       The following command prints the disk path and the status of all disks that are monitored on the schost-2 node.

              # scdpm -p schost-2:all
              schost-2:reboot_on_disk_failure       disabled
              schost-2:/dev/did/dsk/d4              Fail
              schost-2:/dev/did/dsk/d3              Ok

EXIT STATUS
       The following exit values are returned:

       0      The command completed successfully.

       1      The command failed completely.

       2      The command failed partially.

       Note - The disk path is represented by a node name and a disk name. The node name must be the host name or all. The disk name must be
       the global disk name, a UNIX path name, or all. The disk name can be either the full global path name or the disk name: /dev/did/dsk/d3
       or d3. The disk name can also be the full UNIX path name: /dev/rdsk/c0t0d0s0.

       Disk path status changes are logged with the syslogd LOG_INFO facility level. All failures are logged with the LOG_ERR facility level.

ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:

       +-----------------------------+-----------------------------+
       |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
       +-----------------------------+-----------------------------+
       |Availability                 |SUNWsczu                     |
       +-----------------------------+-----------------------------+
       |Stability                    |Evolving                     |
       +-----------------------------+-----------------------------+

SEE ALSO
       Intro(1CL), cldevice(1CL), clnode(1CL), attributes(5)

       Sun Cluster System Administration Guide for Solaris OS

Sun Cluster 3.2                          22 Jun 2006                          scdpm(1M)