Splitting rpool mirror disk in ZFS
Posted by solaris_1977 on Thursday, June 13, 2013, 02:17 PM

Hi,

I have a Solaris 10 (release 7) box with no non-global zones. A very critical application change is coming up in a couple of days. To be on the safe side, we want to keep a few levels of backup in case of failure. This server has three pools:
Code:
root@prod_ddoa01:/# zpool list
NAME           SIZE  USED   AVAIL  CAP  HEALTH  ALTROOT
cvs_app_pool   165G  159G   6.50G  96%  ONLINE  -
iw_app_pool   1.11T  739G    400G  64%  ONLINE  -
rpool          136G  109G   26.6G  80%  ONLINE  -
root@prod_ddoa01:/#

I will create a Live Upgrade copy with lucreate. Apart from that, can I pull one disk out of the server while it is shut down? If the application change fails, I would pull out disk 0, re-insert disk 1 (the one that was pulled out), and bring the server up. Does that make sense?
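For reference, this is roughly the lucreate step I have in mind; the boot environment name pre_change is just a placeholder, not something already on the box:
Code:
# Create an alternate boot environment as a clone of the current ZFS root BE
# ("pre_change" is an example name)
lucreate -n pre_change

# Confirm the new BE shows up and is marked complete
lustatus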
Also, I am not sure whether Live Upgrade will conflict with this procedure. And if I do follow it, do I need to run any commands to break the ZFS rpool mirror first?
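From the reading I have done so far, breaking the mirror cleanly would look something like the commands below. The device name c1t1d0s0 and the pool name rpool_backup are only examples, and I am not certain zpool split is even supported on a root pool on this release, so please correct me if this is wrong:
Code:
# See which disks make up the rpool mirror
zpool status rpool

# Option 1: detach the second submirror; the data stays on the disk,
# but the result is not importable as a standalone pool
zpool detach rpool c1t1d0s0

# Option 2 (needs Solaris 10 9/10 or later): split the mirror into a
# new pool that can be imported on its own
zpool split rpool rpool_backup c1t1d0s0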

Last edited by solaris_1977; 06-13-2013 at 11:17 PM.
 
