Can I create virtual disk from zpool on Solaris 11.4 for OVM?
Posted by solaris_1977 in Solaris on Wednesday, March 11, 2020, 07:14 PM

Hello,

I am here again, with another issue.

I am setting up a new Oracle VM environment (GUI-based). On the backend it is the LDoms concept, but the GUI seems to offer an easier interface.
For now we don't have any SFP modules, so these two SPARC S7 servers are not connected to SAN storage and I have to live with local disks only. That is where the limitation comes in.
I can create one repository from one local disk and create VMs on it. That means those VMs will sit on a repository made of a single local disk. If I ever need to replace that disk (due to hard or transport errors), I will not be able to migrate the VMs to another repository/pool, because they live on that local disk. That would be a bad design.
I thought that if I created a zpool with two disks, I should at least be able to see that pool in OVM, but the Oracle VM Manager GUI does not recognize zpools, only disks. I opened a case with Oracle asking for the best practice here, and they responded: "OVM is not designed to manage redundancy with local disks. That is why Oracle VM is capable of handling SAN storage over the network or even Fibre Channel. From the Solaris perspective, I think you can manually assign a zpool as a virtual disk to the LDoms guest, but you need to research whether this is possible."

I tried googling and didn't find any relevant note on whether I can create virtual disks from a zpool.
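For what it's worth, Oracle VM Server for SPARC itself does support ZFS volumes (zvols) as virtual disk backends, so at the plain ldm level this should be possible even if OVM Manager never sees the pool. A minimal sketch, assuming a mirrored pool named tank, a virtual disk service primary-vds0, and a guest domain ldg1 (all names hypothetical):

    # create a mirrored pool from the two local disks
    zpool create tank mirror c0t0d0 c0t1d0

    # carve out a 50 GB ZFS volume to back the guest's virtual disk
    zfs create -V 50g tank/ldg1-disk0

    # export the zvol through the primary domain's virtual disk service
    ldm add-vdsdev /dev/zvol/dsk/tank/ldg1-disk0 ldg1-disk0@primary-vds0

    # attach it to the guest domain
    ldm add-vdisk vdisk0 ldg1-disk0@primary-vds0 ldg1

The catch is that a guest wired up this way lives outside the OVM Manager repository model, so the GUI may not be able to manage or migrate it; that is presumably the part Oracle support wanted researched.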

Another idea that came to mind was to create a hardware RAID volume, so the OS would see a single disk. But then I found this link - Hardware RAID Support -
SPARC and Netra SPARC S7-2 Series Servers Administration Guide
- and it says these servers do not support hardware RAID.
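If you want to confirm that from the OS side rather than just the manual, raidctl(1M) reports whatever hardware RAID capability the controller exposes; on the S7's onboard controller I would expect it to come back empty (shown for illustration only, output will vary):

    # list any RAID controllers/volumes visible to the OS
    raidctl -l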

Any suggestions or ideas, please?

Thanks
 

10 More Discussions You Might Find Interesting

1. Solaris

How to create new partitions in Solaris, from the raw disk?

Hi all, I would like to know how to make new partitions... I currently have allocated 60G for various slices (I have used 4 out of the 7 available slices)... I am running only Solaris on my box. My plan is to have the entire disk dedicated to Solaris and run other OS from within... (19 Replies)
Discussion started by: wrapster
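For reference, slicing a disk on Solaris is normally done interactively with format(1M), or non-interactively with fmthard(1M) if you already have a VTOC to apply. A rough sketch (device name hypothetical):

    # interactive: pick the disk, then use the partition menu
    format
    #   format> partition
    #   partition> modify    (resize the slices)
    #   partition> label     (write the new VTOC)

    # non-interactive: apply a prepared VTOC file to the raw device
    fmthard -s /tmp/vtoc.txt /dev/rdsk/c0t0d0s2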

2. Shell Programming and Scripting

Virtual disk to create and partition

I have to do this exercise: create a virtual disk, partition the disk, create a file system, mount the file system. I'm using Minix (which runs under Qemu as the guest machine) on Linux (the host). Does anybody know how to solve the first three points? Thanks (4 Replies)
Discussion started by: Guccio
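For reference, the host side is usually just a flat image file handed to the emulator; the guest side uses Minix-specific tools. A rough sketch, assuming Minix 3 device naming (all names hypothetical, and the Minix commands are from memory):

    # on the host: create a 1 GB raw image and attach it as a second disk
    qemu-img create -f raw extra-disk.img 1G
    qemu-system-i386 -hda minix.img -hdb extra-disk.img

    # inside the Minix guest: partition, make a filesystem, mount
    part /dev/c0d1               # interactive partitioner
    mkfs.mfs /dev/c0d1p0         # Minix filesystem on the first partition
    mount /dev/c0d1p0 /mnt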

3. Solaris

How to create a mirror disk on a Solaris machine?

Hi, I'm a newbie in Solaris 10. Can someone explain the steps for creating a mirror disk on a Solaris machine? Thanks in advance. (5 Replies)
Discussion started by: Wong_Cilacap
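For reference, the classic Solaris 10 answer is Solaris Volume Manager; a condensed sketch for mirroring the root slice (device names hypothetical):

    # state database replicas on a spare slice of each disk
    metadb -a -f -c3 c0t0d0s7 c0t1d0s7

    # two submirrors and a one-way mirror on top of the first
    metainit -f d10 1 1 c0t0d0s0
    metainit d20 1 1 c0t1d0s0
    metainit d0 -m d10

    # for root only: update vfstab/system, reboot, then attach side two
    metaroot d0
    # ...after the reboot:
    metattach d0 d20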

4. Solaris

Create virtual NIC in Solaris 10

Hi all, does anybody know how to create a virtual NIC in Solaris 10? If anyone has a good article or reference, kindly provide it; I tried to Google but didn't find a good one. (7 Replies)
Discussion started by: jamisux
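Worth noting: true VNICs (dladm create-vnic) only arrived with Crossbow in Solaris 11; on stock Solaris 10 the closest equivalent is a logical interface plumbed on top of a physical NIC. A sketch (interface name and address hypothetical):

    # add a second logical interface on e1000g0
    ifconfig e1000g0:1 plumb
    ifconfig e1000g0:1 192.168.1.10 netmask 255.255.255.0 up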

5. Solaris

How to create virtual disks in Solaris

Hi, I have installed Oracle 10g Release 2 in a Solaris 10 zone. I want to configure ASM in the local zone using virtual disks in place of real disks. I have already configured ASM using virtual disks in the Solaris 10 global zone. How can I do it in the local zone? Kindly guide me with proper... (1 Reply)
Discussion started by: malikshahid85
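For reference, one common trick is to back the "disks" with files via lofi in the global zone and then delegate the lofi devices to the non-global zone; a sketch (paths and zone name hypothetical):

    # global zone: create a file-backed block device
    mkfile 1g /export/asmdisk1
    lofiadm -a /export/asmdisk1      # prints e.g. /dev/lofi/1

    # delegate the block and raw devices to the zone
    zonecfg -z myzone
    #   add device
    #   set match=/dev/lofi/1
    #   end
    #   add device
    #   set match=/dev/rlofi/1
    #   end
    #   commit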

6. Solaris

How to create metadb with zpool in Solaris 11

Hi, my root pool is as follows. How can I create a metadb if I want to create SVM volumes?

    zpool status
      pool: rpool1
     state: ONLINE
      scan: none requested
    config:
            NAME        STATE   READ WRITE CKSUM
            rpool1      ONLINE     0     0     0
              c4t1d0s0  ...
(10 Replies)
Discussion started by: incredible
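The short answer is that SVM state database replicas live on raw slices and are independent of ZFS, so the pool itself cannot hold them; you need a small slice that is not part of rpool1. A sketch (slice name hypothetical):

    # put three replicas on a dedicated spare slice
    metadb -a -f -c3 c4t1d0s7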

7. Solaris

Create a boot disk mirror on Solaris 10 x86

I’m setting up a boot disk mirror on Solaris 10 x86. I’m used to doing it on SPARC, where you can copy the partition table using fmthard. My x86 boot disk has 2 primary partitions, a Solaris one and a diagnostic one. Is there a way to copy those 2 primary partitions to the second disk without... (6 Replies)
Discussion started by: TKD
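For reference, the x86 counterpart of the fmthard trick is fdisk's save/restore options for the primary partition table, followed by fmthard for the Solaris slices. A sketch (device names hypothetical):

    # dump the fdisk partition table of the boot disk to a file
    fdisk -W /tmp/boot.fdisk /dev/rdsk/c0t0d0p0
    # write the same table onto the second disk
    fdisk -F /tmp/boot.fdisk /dev/rdsk/c1t0d0p0

    # then copy the VTOC inside the Solaris partition, as on SPARC
    prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2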

8. Solaris

Add disk to zpool

Hi, quick question. I have a data zpool that consists of 1 disk.

      pool: data
     state: ONLINE
     scrub: none requested
    config:
            NAME                    STATE   READ WRITE CKSUM
            data                    ONLINE     0     0     0
            c0t50002AC0014B06BEd0   ONLINE...
(2 Replies)
Discussion started by: general_lee
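For reference, the answer depends on whether the goal is redundancy or capacity; a sketch (the second disk name is hypothetical):

    # redundancy: turn the single disk into a two-way mirror
    zpool attach data c0t50002AC0014B06BEd0 c0t50002AC0014B06BFd0

    # capacity: add a second top-level vdev (stripe, no redundancy)
    zpool add data c0t50002AC0014B06BFd0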

9. Solaris

Solaris 10 virtual disk (ramdisk) creation for the sun4v (T-2000 simulator) architecture

I have been trying to create a 2 GB ramdisk (virtual) to run on my T-2000 simulator (Legion), which has the sun4v architecture. I have a SPARC workstation which runs the sun4u architecture with Solaris 10. I have created a ramdisk image using the dd command and newfs, then used ufsrestore to restore the... (3 Replies)
Discussion started by: Zam_1234
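For reference, a file-backed lofi device is the usual way to build such an image, with the caveat that a filesystem restored from a sun4u box will still lack the sun4v kernel bits the simulator needs. A sketch (paths hypothetical):

    # create a 2 GB backing file and attach it as a block device
    mkfile 2g /export/ramdisk.img
    lofiadm -a /export/ramdisk.img   # prints e.g. /dev/lofi/1

    # build UFS on it and restore the dump into it
    newfs /dev/rlofi/1
    mount /dev/lofi/1 /mnt
    cd /mnt && ufsrestore rf /export/root.dump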

10. Solaris

Replace zpool with another disk

Issue: I had a zpool which was full.

    pool_temp1   199G   197G   1.56G   99%   ONLINE   -
    pool_temp2   199G   196G   3.09G   98%   ONLINE   -

As you can see, full, so I replaced it with a larger disk:

    zpool replace pool_temp1 c3t600144F0FF8BA036000058CC1DB80008d0s0...
(2 Replies)
Discussion started by: rrodgers
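One gotcha with replacing a disk by a larger one: the pool does not grow by itself. A sketch (the new disk name is hypothetical):

    # replace the full device with the larger one
    zpool replace pool_temp1 c3t600144F0FF8BA036000058CC1DB80008d0s0 c3tNEWd0s0

    # the extra space only appears once expansion is enabled
    zpool set autoexpand=on pool_temp1
    # or expand the already-replaced device in place
    zpool online -e pool_temp1 c3tNEWd0s0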