Operating Systems > Solaris > Solaris 11: Permission issues with auto-scrub ZFS pool
Post 302754515 by GP81 on Thursday 10th of January 2013 04:32:30 PM
Here is a very useful blog about RBAC and how you can grant root privileges with pfexec. I'm not the author:
Less known Solaris features: pfexec - c0t0d0s0.org
Less known Solaris features: RBAC and Privileges - c0t0d0s0.org

It's about Solaris 10. I haven't used RBAC on Solaris 11, but as far as I can see there is no built-in Primary Administrator profile there. I think you can create a profile appropriate to your needs, for example one covering just the zfs command.
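As a sketch of what such a custom profile would look like (the "ZFS Admin" name and description below are made up, not from a real system): on Solaris 11 a profile is just a line in /etc/security/prof_attr.d plus a matching line in /etc/security/exec_attr.d. This fragment writes demo copies under /tmp so it is harmless to run:

```shell
# Sketch only: a hypothetical "ZFS Admin" profile.  Real entries belong
# under /etc/security/prof_attr.d and /etc/security/exec_attr.d; we use
# /tmp here so the example does not touch system files.
mkdir -p /tmp/rbac-demo

# prof_attr(4)-style entry: declares the profile and its description.
echo 'ZFS Admin:::Manage ZFS file systems:' > /tmp/rbac-demo/prof_attr

# exec_attr(4)-style entry: under this profile, run zfs with euid=0
# (same shape as the stock "ZFS File System Management" entry below).
echo 'ZFS Admin:solaris:cmd:RO::/usr/sbin/zfs:euid=0' > /tmp/rbac-demo/exec_attr

cat /tmp/rbac-demo/exec_attr
```

On a real system you would then assign it with usermod -P +'ZFS Admin' user1 and run the command through pfexec.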

There is a profile related to file system management:
Code:
root@solaris11:/etc/security/exec_attr.d# grep zfs *
core-os:ZFS File System Management:solaris:cmd:RO::/usr/sbin/zfs:euid=0

You can try it and see whether it meets your needs.

I have tested it, and it looks OK for creating a ZFS file system. Without the profile, even pfexec is denied:
Code:
user1@solaris11:~$ profiles
          Basic Solaris User
          All
user1@solaris11:~$ pfexec zfs create pula01/test
cannot create 'pula01/test': permission denied

Code:
root@solaris11 # usermod -P +'ZFS File System Management' user1

Code:
user1@solaris11:~$ profiles
          ZFS File System Management
          Basic Solaris User
          All
user1@solaris11:~$ zfs create pula01/test
cannot create 'pula01/test': permission denied
user1@solaris11:~$ pfexec zfs create pula01/test

Another edit: the ZFS File System Management profile works fine for the zfs command, but for the zpool command you need a different profile:
Code:
root@solaris11 # usermod -P +"ZFS Storage Management" user1

And then pfexec zpool scrub works fine too.
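To see why the two commands need different profiles, compare their exec_attr entries. The snippet below mimics the lookup pfexec performs, against sample entries written to /tmp (the entries are paraphrased, not verbatim system output; check /etc/security/exec_attr.d on your own box for the exact lines):

```shell
# Sample exec_attr(4)-style data (paraphrased, not real system output).
cat > /tmp/exec_attr.demo <<'EOF'
ZFS File System Management:solaris:cmd:RO::/usr/sbin/zfs:euid=0
ZFS Storage Management:solaris:cmd:RO::/usr/sbin/zpool:uid=0
EOF

# Roughly what pfexec does: find the entry matching profile + command,
# then apply the privilege attributes from the last field.
lookup() {  # usage: lookup <profile> <command-path>
    grep "^$1:.*:$2:" /tmp/exec_attr.demo | awk -F: '{print $NF}'
}

lookup 'ZFS File System Management' /usr/sbin/zfs     # -> euid=0
lookup 'ZFS Storage Management'     /usr/sbin/zpool   # -> uid=0
```

So each profile grants elevated privileges only for its own command, which is why user1 needs both profiles to run zfs and zpool scrub via pfexec.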

Last edited by GP81; 01-11-2013 at 07:07 AM.
 
