Full Discussion: Install Solaris 11 with ZFS
Post 302927914 by jlliagre on Sunday 7th of December 2014 09:11:25 AM
Quote:
Originally Posted by mzainal
How about raidz? Is it the same as RAID 5?
It is not the same, although it shares some similarities: both spread data and parity across the disks, but raidz uses copy-on-write with variable-width stripes, so it does not suffer from the RAID 5 write hole.
Quote:
Can I use raidz on a SPARC T4?
Yes, but not for the root pool.
Quote:

Is this stripe mode?
Code:
https://community.oracle.com/thread/2488541?tstart=0

Precisely.
Quote:
Is it possible to do this on a SPARC T4 machine? Right now I can see 1 disk attached to the zpool. The other 3 are not formatted yet.
You can create a second pool based on raidz with the three remaining disks if you are looking for both redundancy and capacity.
You can create a stripe with these disks if you are looking for maximum capacity.
For better reliability and performance, you can also use the first disk as a mirror for the root pool and use the remaining two disks as another mirrored pool.
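For illustration, assuming the three unused disks show up as c0t1d0, c0t2d0, and c0t3d0 (placeholder device names; check the real ones with format), the three layouts above would be created roughly like this:
Code:
# raidz pool: redundancy plus capacity
zpool create datapool raidz c0t1d0 c0t2d0 c0t3d0

# plain stripe: maximum capacity, no redundancy
zpool create datapool c0t1d0 c0t2d0 c0t3d0

# or: attach one disk to mirror the root pool, and mirror the other two as a data pool
# (on some Solaris releases the root pool disk needs an SMI label, so a slice such as c0t1d0s0 may be required)
zpool attach rpool c0t0d0 c0t1d0
zpool create datapool mirror c0t2d0 c0t3d0

The pool name datapool is arbitrary; pick whatever fits your naming scheme.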

You are also free to use UFS on these extra disks, or to use them, or partitions on them, as raw devices for LDoms or kernel zones.
 


volume-request(4)

NAME
volume-request, volume-defaults - Solaris Volume Manager configuration information for top down volume creation with metassist

SYNOPSIS
    /usr/share/lib/xml/dtd/volume-request.dtd
    /usr/share/lib/xml/dtd/volume-defaults.dtd
    /etc/defaults/metassist.xml

DESCRIPTION
A volume request file, XML-based and compliant with the volume-request.dtd Document Type Definition, describes the characteristics of the volumes that metassist should produce.

A system administrator would use the volume request file instead of providing options at the command line to give more specific instructions about the characteristics of the volumes to create. A volume request file can request more than one volume, but all requested volumes must reside in the same disk set.

If you start metassist by providing a volume-request file as input, metassist can implement the configuration specified in the file, can generate a command file that sets up the configuration for you to inspect or edit, or can generate a volume configuration file for you to inspect or edit.

As a system administrator, you would want to create a volume request file if you need to reuse configurations (and do not want to reenter the same command arguments), or if you prefer to use a configuration file to specify volume characteristics.

Volume request files must be valid XML that complies with the document type definition in the volume-request.dtd file, located at /usr/share/lib/xml/dtd/volume-request.dtd. You create a volume request file, and provide it as input to metassist to create volumes from the top down.
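The three modes of operation described above might be invoked as in the following sketch (confirm the exact option letters against metassist(1M); the request file path is only a placeholder):

    # implement the requested configuration directly
    metassist create -F /tmp/request.xml

    # print the command script that would be run, for inspection or editing
    metassist create -c -F /tmp/request.xml

    # print the resulting volume configuration file instead of acting on it
    metassist create -d -F /tmp/request.xml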
Defining Volume Request
The top level element <volume-request> surrounds the volume request data. This element has no attributes. A volume request requires at least one <diskset> element, which must be the first element after <volume-request>.

Optionally, the <volume-request> element can include one or more <available> and <unavailable> elements to specify which controllers, or disks associated with a specific controller, can or cannot be used to create the volume.

Optionally, the <volume-request> element can include a <hsp> element to specify characteristics of a hot spare pool if fault recovery is used. If not specified for a volume with fault recovery, the first hot spare pool found in the disk set is used. If no hot spare pool exists but one is required, a hot spare pool is created.

Optionally, the volume-request can include one or more <concat>, <stripe>, <mirror>, or <volume> elements to specify the volumes to create.

Defining Disk Set
Within the <volume-request> element, a <diskset> element must exist. The <diskset> element, with its name attribute, specifies the name of the disk set to be used. If this disk set does not exist, it is created. This element and the name attribute are required.

Defining Availability
Within the <volume-request> element and within other elements, you can specify available or unavailable components (disks, or disks on a specific controller path) for use or exclusion from use in a volume or hot spare pool. The <available> and <unavailable> elements require a name attribute, which specifies either a full ctd name or a partial ctd name that is used with an implied wildcard to complete the expression. For example, specifying c3t2d0 as available would look like:

    <available name="/dev/dsk/c3t2d0">

The <available> element also makes any unnamed components unavailable. Specifying all controllers except c1 as unavailable would look like:

    <available name="c1">

Specifying all disks on controller 2 as unavailable would look like:

    <unavailable name="c2">

The <unavailable> element can also be used to further restrict the list of available components. For example, specifying all controllers except c1 as unavailable, and making all devices associated with c1t2 unavailable as well, would look like this:

    <available name="c1">
    <unavailable name="c1t2">

Components specified as available must be either part of the named disk set used for this volume creation, or unused and not in any disk set. If the components are selected for use but are not in the specified disk set, the metassist command automatically adds them to the disk set.

It is unnecessary to specify components that are in other disk sets as unavailable; metassist automatically excludes them from consideration. However, unused components, or components that are not obviously used (for example, an unmounted slice that is reserved for different uses), must be explicitly specified as unavailable, or the metassist command can include them in the configuration.

Defining Hot Spare Pool
The next element within the <volume-request> element, after the <diskset> and, optionally, the <available> and <unavailable> elements, is the <hsp> element. Its sole attribute specifies the name of the hot spare pool:

    <hsp name="hsp001">

Hot spare pool names must start with hsp and end with a number, following the existing Solaris Volume Manager hot spare pool naming requirements.

Within the <hsp> element, you can specify one or more <available> and <unavailable> elements to specify which disks, or disks associated with a specific controller, can or cannot be used to create the hot spares within the pool. Also within the <hsp> element, you can use the <slice> element to specify hot spares to be included in the hot spare pool (see Defining Slice). Depending on the requirements placed on the hot spare pool by other parts of the volume request, additional slices can be added to the hot spare pool.

Defining Slice
The <slice> element is used to define slices to include or exclude within other elements. It requires only a name attribute to specify the ctd name of the slice, and the context of the <slice> element determines the function of the element. Sample slice elements might look like:

    <slice name="c0t1d0s2" />
    <slice name="c0t12938567201lkj29561sllkj381d0s2" />

Defining Stripe
The <stripe> element defines stripes (interlaced RAID 0 volumes) to be used in a volume. It can contain either <slice> elements (to explicitly determine which slices are used) or appropriate combinations of <available> and <unavailable> elements if the specific determination of slices is to be left to the metassist command.

The <stripe> element takes an optional name attribute to specify a name. If the name is not specified, an available name is automatically selected from the available Solaris Volume Manager names. If possible, names for related components are related.

The <stripe> element takes an optional size attribute that specifies the size as a value and units (for example, 10TB, 5GB). If slices for the <stripe> are explicitly specified, the size attribute is ignored.

The <available> and <unavailable> elements can be used to constrain the slices used in a stripe.

The <stripe> element takes optional mincomp and maxcomp attributes to specify the minimum and maximum number of components that can be included in it. As with size, if slices for the <stripe> are explicitly specified, the mincomp and maxcomp attributes are ignored.

The <stripe> element takes an optional interlace attribute as a value and units (for example, 16KB, 5BLOCKS, 20KB). If this value is not specified, the Solaris Volume Manager default value is used.
The <stripe> element takes an optional usehsp attribute to specify if a hot spare pool should be associated with this component. This attribute is specified as a boolean value, as usehsp="TRUE". If the component is not a submirror, this attribute is ignored.

Defining Concat
The <concat> element defines concats (non-interlaced RAID 0 volumes) to be used in a configuration. It is specified in the same way as a <stripe> element, except that the mincomp, maxcomp, and interlace attributes are not valid.

Defining Mirror
The <mirror> element defines mirrors (RAID 1 volumes) to be used in a volume configuration. It can contain combinations of <concat> and <stripe> elements (to explicitly determine which volumes are used as submirrors). Alternatively, it can have a size attribute specified, along with the appropriate combinations of <available> and <unavailable> elements, to leave the specific determination of components to the metassist command.

The <mirror> element takes an optional name attribute to specify a name. If the name is not specified, an available name is automatically selected.

The <mirror> element takes an optional size attribute that specifies the size as a value and units (for example, 10TB, 5GB). If <stripe> and <concat> elements for the mirror are not specified, this attribute is required. Otherwise, it is ignored.

The <mirror> element takes an optional nsubmirrors attribute to define the number of submirrors (1-4) to include. Like the size attribute, this attribute is ignored if the underlying <concat> and <stripe> submirrors are explicitly specified.

The <mirror> element takes an optional read attribute to define the mirror read options (ROUNDROBIN, GEOMETRIC, or FIRST) for the mirror. If this attribute is not specified, the Solaris Volume Manager default value is used.

The <mirror> element takes an optional write attribute to define the mirror write options (PARALLEL, SERIAL, or FIRST) for the mirror. If this attribute is not specified, the Solaris Volume Manager default value is used.

The <mirror> element takes an optional usehsp attribute to specify if a hot spare pool should be associated with each submirror. This attribute is specified as a boolean value, as usehsp="TRUE". If the usehsp attribute is specified in the configuration of the <stripe> or <concat> element used as a submirror, it overrides the value of the usehsp attribute for the mirror as a whole.
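Putting these elements together, a mirror with explicitly chosen submirrors and slices, placed inside a <volume-request> after the <diskset> element, might be written as in the following sketch (the volume names, slices, and interlace value are placeholders, not values from this page):

    <mirror name="d10">
        <stripe name="d11" interlace="32KB">
            <slice name="c1t1d0s0"/>
            <slice name="c1t2d0s0"/>
        </stripe>
        <stripe name="d12" interlace="32KB">
            <slice name="c1t3d0s0"/>
            <slice name="c1t4d0s0"/>
        </stripe>
    </mirror>

Because the slices are given explicitly, the size, mincomp, and maxcomp attributes would be ignored, as described above.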
Defining Volume by Quality of Service
The <volume> element defines volumes (high-level) by the quality of service they should provide. (The <volume> element offers the same functionality that options on the metassist command line can provide.) The <volume> element can contain combinations of <available> and <unavailable> elements to determine which components can be included in the configuration.

The <volume> element takes an optional name attribute to specify a name. If the name is not specified, an available name is automatically selected.

The <volume> element takes a required size attribute that specifies the size as a value and units (for example, 10TB, 5GB).

The <volume> element takes an optional redundancy attribute to define the number of additional copies of data (1-4) to include. In a worst-case scenario, a volume can suffer failure of n-1 components without data loss, where redundancy=n. With fault recovery options, the volume could withstand up to n+hsps-1 non-concurrent failures without data loss. Specifying redundancy=0 results in a RAID 0 volume being created (a stripe, specifically).

The <volume> element takes an optional faultrecovery attribute to determine if additional components should be allocated to recover from component failures in the volume. This is used to determine whether the volume is associated with a hot spare pool. The faultrecovery attribute is a boolean attribute, with a default value of FALSE.

The <volume> element takes an optional datapaths attribute to determine if multiple data paths should be required to access the volume. The datapaths attribute should be set to a numeric value.

Defining Default Values Globally
Global defaults can be set in /etc/default/metassist.xml. This volume-defaults file can contain most of the same elements as a volume-request file, but differs structurally from a volume-request file:

    o  The container element must be <volume-defaults>, not <volume-request>.

    o  The <volume-defaults> element can contain <available>, <unavailable>, <hsp>, <concat>, <stripe>, <mirror>, or <volume> elements. Attributes specified by these elements define global default values, unless overridden by the corresponding attributes and elements in a volume-request. None of these elements is a container element.

    o  The <volume-defaults> element can contain one or more <diskset> elements to provide disk set-specific defaults. The <diskset> element can contain <available>, <unavailable>, <hsp>, <concat>, <stripe>, <mirror>, or <volume> elements.

    o  Settings specified outside of a <diskset> element apply to all disk sets, but can be overridden within each <diskset> element.
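As a sketch of what such a defaults file might look like (illustrative values only, not the defaults shipped with Solaris; the disk set name is a placeholder):

    <volume-defaults>
        <!-- never consider disks on controller c0, in any disk set -->
        <unavailable name="c0"/>
        <!-- defaults that apply only to the mailspool disk set -->
        <diskset name="mailspool">
            <mirror nsubmirrors="2" read="ROUNDROBIN" usehsp="TRUE"/>
            <stripe interlace="64KB"/>
        </diskset>
    </volume-defaults>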
EXAMPLES
Example 1: Creating a Redundant Volume

The following example shows a volume request file used to create a redundant and fault tolerant volume of 1TB.

    <volume-request>
        <diskset name="sparestorage"/>
        <volume size="1TB" redundancy="2" faultrecovery="TRUE">
            <available name="c2" />
            <available name="c3" />
            <unavailable name="c2t2d0" />
        </volume>
    </volume-request>

Example 2: Creating a Complex Configuration

The following example shows a sample volume-request file that specifies a disk set name and specifically itemizes the characteristics of the components to create.

    <volume-request>
        <!-- Specify the disk set to use -->
        <diskset name="mailspool"/>

        <!-- Generally available devices -->
        <available name="c0"/>

        <!-- Create a 3-way mirror with redundant datapaths and HSPs via QoS -->
        <volume size="10GB" redundancy="3" datapaths="2" faultrecovery="TRUE"/>

        <!-- Create a 1-way mirror with a HSP via QoS -->
        <volume size="10GB" faultrecovery="TRUE"/>

        <!-- Create a stripe via QoS -->
        <volume size="100GB"/>
    </volume-request>

BOUNDARY VALUES
    Attribute      Minimum    Maximum
    mincomp        1          N/A
    maxcomp        N/A        32
    nsubmirrors    1          4
    passnum        0          9
    datapaths      1          4
    redundancy     0          4

FILES
    /usr/share/lib/xml/dtd/volume-request.dtd
    /usr/share/lib/xml/dtd/volume-defaults.dtd
    /etc/defaults/metassist.xml

SEE ALSO
    metassist(1M), metaclear(1M), metadb(1M), metadetach(1M), metahs(1M), metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M), metarecover(1M), metareplace(1M), metaroot(1M), metaset(1M), metasync(1M), metattach(1M), mount_ufs(1M), mddb.cf(4)

27 Apr 2005                                                    volume-request(4)