How to create metadb with zpool in Solaris 11
Post 302553586 by DukeNuke2 on Thursday 8th of September 2011, 03:02:38 AM
Also, your system seems to be a cluster, and if you need to share the disks between the hosts, a metaset or a zpool is the only way (besides a VxVM volume) to achieve this...
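
A minimal sketch of both approaches (hedged: the set name "shareset", the pool name "sharepool", the host names, and the disks c1t1d0/c1t2d0 are all hypothetical placeholders):

  # SVM diskset shared between two hosts (run on host1)
  metaset -s shareset -a -h host1 host2   # create the set and register both hosts
  metaset -s shareset -a c1t1d0 c1t2d0    # add the shared drives to the set
  metaset -s shareset -t                  # take ownership before using the volumes
  metaset -s shareset -r                  # release it so host2 can take it

  # ZFS pool moved between hosts
  zpool create sharepool c1t1d0 c1t2d0    # build the pool on host1
  zpool export sharepool                  # release it on host1 ...
  zpool import sharepool                  # ... then run this on host2

Either way, only one host has the storage at a time: a diskset is taken and released, a zpool is exported and imported. Never import the same pool on two hosts at once.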
 

10 More Discussions You Might Find Interesting

1. Solaris

solaris with metadb

Hi All, If Solaris has metadb services on the disk, does that mean it has a HW RAID controller, or something else? Thanks in advance. (1 Reply)
Discussion started by: itik

2. Solaris

How to create metadb when there is no free slice

Hi All, I have to do disk mirroring, and for that I have to create a metadb, since the mirroring has to be done with SVM. However, I do not have any free slice for the metadb. What are the options? Please suggest. (4 Replies)
Discussion started by: kumarmani
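
For reference, one common way out when every slice is in use is to carve the replica slice out of swap (a hedged sketch, not from the thread; all device names are hypothetical):

  swap -d /dev/dsk/c0t0d0s1     # take the swap slice out of service
  format                        # interactively shrink s1 and create a small s7 in the freed space
  swap -a /dev/dsk/c0t0d0s1     # re-add the (now smaller) swap slice
  metadb -a -f -c 2 c0t0d0s7    # place two state database replicas on the new slice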

3. Solaris

zpool create - long time creation

Hello, I have a 520 GB SAN resource connected to the server via FC (4 Gbit). In the next step I want to create a storage pool. After 15 hours the pool has still not been created... Is such a long wait normal? What should I have done? Regards (1 Reply)
Discussion started by: bieszczaders

4. Solaris

Increase root filesystem on solaris zone using zpool

I have a Solaris zone of 12 GB and I have to increase the / filesystem to 31 GB as requested. Earlier I had expanded filesystems other than / by setting the quota to a new value, like "zfs set quota=newvalue mountpoint", but I am not sure whether it's good practice in ZFS because by default in my... (5 Replies)
Discussion started by: vikkash
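
For reference, the quota approach the poster describes looks like this (a sketch only; the dataset name rpool/zones/myzone is a hypothetical placeholder):

  zfs get quota rpool/zones/myzone      # check the current quota
  zfs set quota=31g rpool/zones/myzone  # raise it to the requested 31 GB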

5. BSD

Unable to create zfs zpool in FreeBSD 8.2: no such pool or dataset

I am trying to test simple zfs functionality on a FreeBSD 8.2 VM. When I try to run a 'zpool create' I receive the following error:

# zpool create zfspool /dev/da0s1a
cannot create 'zfspool': no such pool or dataset
# zpool create zfspool /dev/da0
cannot create 'zfspool': no such pool or... (3 Replies)
Discussion started by: bstring

6. Solaris

Check Solaris VM Databases metadb does not have enough information about logical volumes

Check Solaris VM Databases: "metadb does not have enough information about logical volumes. Current value is 0%." I have checked the SVM status; all disks are in a good state and synced perfectly, and there are no errors in metadb -i. What exactly does this alert mean? What do we have to check for the value? Please... (1 Reply)
Discussion started by: Naveen.6025

7. Solaris

Upgrading Solaris - what happens to zpool/zfs versions

Hi everyone, I'm hoping someone can help me out here. I've googled a lot and can't find an easy answer to this. We're in the process of upgrading Solaris from v10 5/08 to v10 9/10. The zpools for luns are currently at version 10, and I understand Solaris v10 9/10 has support for... (3 Replies)
Discussion started by: badoshi

8. UNIX for Advanced & Expert Users

Solaris 10: I forgot to detach a zone before zpool export. Uninstall zone?

Dear all, recently I migrated a Solaris zone from one host to another. The zone was inside a zpool. The zpool contains two volumes. I did the following:

host1:
$ zlogin zone1 shutdown -y -g0 -i0   # zone status changes from running to installed
$ zpool export zone1

host2:
$ zpool... (2 Replies)
Discussion started by: custos
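
For reference, the usual migration sequence detaches the zone before the pool is exported (a hedged sketch built from the poster's names):

  host1# zoneadm -z zone1 halt
  host1# zoneadm -z zone1 detach    # zone state goes to 'configured'; detach manifest is written
  host1# zpool export zone1
  host2# zpool import zone1
  host2# zoneadm -z zone1 attach    # add -u to update the zone if the hosts differ
  host2# zoneadm -z zone1 boot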

9. Solaris

Solaris 10 Volume Manager - adding slice to metadb

Hi all, I added a new disk slice to the current metadb. Below is what I see:

bash-3.2# metadb -i
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c0t0d0s7
     a    p  luo        8208            8192            ... (3 Replies)
Discussion started by: javanoob
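
For reference, adding replicas on a freshly created slice typically looks like this (a sketch; the slice c0t1d0s7 is a hypothetical placeholder):

  metadb -a -c 2 c0t1d0s7   # add two state database replicas on the new slice
  metadb -i                 # verify the new replicas show up with healthy flags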

10. Solaris

Solaris can not boot because metadb lost

How do I recover the metadb? Thanks! (1 Reply)
Discussion started by: dzung
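
A hedged sketch of the usual first steps once you can reach a shell (single-user mode if need be; the slice names are hypothetical):

  metadb -i                    # see which replicas are flagged with errors
  metadb -d -f c0t0d0s7        # delete the bad replicas
  metadb -a -f -c 2 c0t1d0s7   # recreate replicas on a known-good slice
  init 6                       # reboot once a replica quorum exists again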
mediator(7D)							      Devices							      mediator(7D)

NAME
mediator - support for HA configurations consisting of two strings of drives

DESCRIPTION
     Beginning with a prior version, Solaris Volume Manager provided support for high-availability (HA) configurations consisting of two hosts that share at least three strings of drives and that run software enabling exclusive access to the data on those drives from one host. (Note: Volume Manager, by itself, does not actually provide a high-availability environment. The diskset feature is an enabler for HA configurations.)

     Volume Manager provides support for a low-end HA solution consisting of two hosts that share only two strings of drives. The hosts in this type of configuration, referred to as mediators, run a special daemon, rpc.metamedd(1M). The mediator hosts take on additional responsibilities to ensure that data is available in the case of host or drive failures.

     In a mediator configuration, two hosts are physically connected to two strings of drives. This configuration can survive the failure of a single host or a single string of drives, without administrative intervention. If both a host and a string of drives fail (multiple failures), the integrity of the data cannot be guaranteed. At this point, administrative intervention is required to make the data accessible.

     The following definitions pertain to a mediator configuration:

     diskset
           A set of drives containing metadevices and hot spares that can be shared exclusively (but not concurrently) by two hosts.

     Volume Manager state database
           A replicated database that stores metadevice configuration and state information.

     mediator host
           A host that runs the rpc.metamedd(1M) daemon and that has been added to a diskset. The mediator host participates in checking the state database and the mediator quorum.

     mediator quorum
           The condition achieved when the number of accessible mediator hosts is equal to half+1 the total number of configured mediator hosts. Because it is expected that there will be two mediator hosts, this number will normally be 2 ([(2/2) + 1] = 2).

     replica
           A single copy of the Volume Manager metadevice state database.

     replica quorum
           The condition achieved when the number of accessible replicas is equal to half+1 the total number of configured replicas. For example, if a system is configured with ten replicas, the quorum is met when six are accessible ([(10/2) + 1] = 6).

     A mediator host running the rpc.metamedd(1M) daemon keeps track of replica updates. As long as the following conditions are met, access to data occurs without any administrative intervention:

     o  The replica quorum is not met.

     o  Half of the replicas are still accessible.

     o  The mediator quorum is met.

     The following conditions describe the operation of mediator hosts:

     1. If the replica quorum is met, access to the diskset is granted. At this point no mediator host is involved.

     2. If the replica quorum is not met, half of the replicas are accessible, the mediator quorum is met, and the replica and mediator data match, access to the diskset is granted. The mediator host contributes the deciding vote.

     3. If the replica quorum is not met, half of the replicas are accessible, the mediator quorum is not met, half of the mediator hosts are accessible, and the replica and mediator data match, the system prompts you to grant or deny access to the diskset.

     4. If the replica quorum is not met, half of the replicas are accessible, the mediator quorum is met, and the replica and mediator data do not match, access to the diskset is read-only. You can delete replicas, release the diskset, and retake the diskset to gain read-write access to the data in the diskset.

     5. In all other cases, diskset access is read-only. You can delete replicas, release the diskset, and retake the diskset to gain read-write access to the data in the diskset.
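     To see how these quorum rules play out on a live diskset, count the accessible replicas and check the mediator hosts (a hedged sketch; the set name "blue" is hypothetical, and medstat(1M) is the SVM mediator status tool):

       metadb -s blue -i   # list the set's replicas; replica quorum = half + 1 accessible
       medstat -s blue     # report the status of each configured mediator host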
     The metaset(1M) command administers disksets and mediator hosts. The following options to the metaset command pertain only to administering mediator hosts.

     -a -m mediator_host_list
           Adds mediator hosts to the named set. A mediator_host_list is the nodename of the mediator host to be added and up to 2 other aliases for the mediator host. The nodename and aliases for each mediator host are separated by commas. Up to 3 mediator hosts can be specified for the named diskset.

     -d -m mediator_host_list
           Deletes mediator hosts from the named diskset. Mediator hosts are deleted from the diskset by specifying the nodename of the mediator host to delete.

     -q
           Displays an enumerated list of tags pertaining to "tagged data" that may be encountered during a take of the ownership of a diskset.

     -t [-f] -y
           Takes ownership of a diskset safely, unless -f is used, in which case the take is unconditional. If metaset finds that another host owns the set, this host will not be allowed to take ownership of the set. If the set is not owned by any other host, all the disks within the set will be owned by the host on which metaset was executed. The metadevice state database is read in and the shared metadevices contained in the set become accessible. The -t option will take a diskset that has stale databases. When the databases are stale, metaset will exit with code 66, and a message will be printed. At that point, the only operations permitted are the addition and deletion of replicas. Once the addition or deletion of the replicas has been completed, the diskset should be released and retaken to gain full access to the data. If mediator hosts have been configured, some additional exit codes are possible. If half of the replicas and half of the mediator hosts are operating properly, the take will exit with code 3. At this point, you can add or delete replicas, or use the -y option on a subsequent take. If the take operation encounters "tagged data," the take operation will exit with code 2. You can then run the metaset command with the -q option to see an enumerated list of tags.

     -t [-f] -u tagnumber
           Once a tag has been selected, a subsequent take with -u tagnumber can be executed to select the data associated with the given tagnumber.
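     A hedged usage sketch of these options (the diskset name "blue", the host names, and the alias are hypothetical):

       metaset -s blue -a -m host1,host1-priv   # add host1 (nodename plus one alias) as a mediator host
       metaset -s blue -a -m host2              # add a second mediator host
       metaset -s blue -t                       # take the set; exit code 66 means stale databases
       metaset -s blue -q                       # if a take reports tagged data, enumerate the tags
       metaset -s blue -t -u 1                  # retake, selecting the data associated with tag 1
       metaset -s blue -d -m host2              # delete a mediator host by nodename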
SEE ALSO
     metaset(1M), md(7D), rpc.metamedd(1M), rpc.metad(1M)

     Sun Cluster documentation, Solaris Volume Manager Administration Guide

NOTES
     Diskset administration, including the addition and deletion of hosts and drives, requires all hosts in the set to be accessible from the network.

SunOS 5.11                       20 Jun 2008                       mediator(7D)