Solaris 11: Permission issues with auto-scrub ZFS pool
Short version:
The scrub command fails, saying I do not have permission to perform that action. Apparently scrub is not one of the pfexec-allowed actions. Any idea how to get around it?
Long version:
I got tired of running scrubs manually and am trying to make them happen automatically.
It seems simple enough to set up a cron job for it (once Google informed me of the existence of cron :P).
Wanting to test it out and isolate issues, I figured, based on my experience, that the best way to do so is with a script.
So I simply created a new file, /usr/scripts/scrub.sh, which contains:
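Something like this, as a minimal sketch (the pool names are just examples; substitute your own):
Code:
#!/usr/bin/bash
# start a scrub on each pool (pool names are examples)
pfexec zpool scrub rpool
pfexec zpool scrub tank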
But that doesn't work; no permissions. I verified it by just typing the command by itself.
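That is, something like (the pool name is again just an example):
Code:
pfexec zpool scrub rpool
and it gives the same permission error.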
I could modify the script to remove the pfexec instances, and then I would just need to schedule the script to run as an administrator, which I don't know how to do.
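Put something like this in root's crontab, as a sketch (the schedule is an example; firing at 03:00 on the 1st and 15th roughly approximates every two weeks, since cron has no native two-week interval):
Code:
# root's crontab (edit with: crontab -e)
0 3 1,15 * * su - root -c '/usr/scripts/scrub.sh'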
Change the word root to any suitable admin username. NOTE: cron does not exec /etc/profile, nor does it run .profile for the user in question. In other words, your environment settings (PATH, etc.) in cron are probably wrong, for any user. You have to add the environment from inside the script. This one change alone can fix a lot of problems in cron scripts. su - [username] does log the user in correctly.
As which user are you trying to execute this command?
If it is a regular user, then you must assign an appropriate profile to that user account.
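You can check what is currently assigned with the profiles command (the username is an example):
Code:
# list the rights profiles assigned to a user
profiles bob
# list the commands those profiles allow, with their attributes
profiles -l bob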
How do I do that? (Or rather, what terms should I google to find the correct manuals to read; is there a good Solaris wiki you can recommend?)
Quote:
Originally Posted by jim mcnamara
Change the word root to any suitable admin username.
I tested this by typing it in a regular user terminal and got asked for the password for root. I have it, of course, but that would be unsuitable for automatic scheduling. Is there a way I could, as root, give a regular user permission to use a command normally reserved for root?
Quote:
NOTE: cron does not exec /etc/profile, nor does it run .profile for the user in question. In other words, your environment settings (PATH, etc.) in cron are probably wrong, for any user. You have to add the environment from inside the script. This one change alone can fix a lot of problems in cron scripts. su - [username] does log the user in correctly.
Thank you. I haven't actually gotten around to using cron yet; my previous errors came from simply trying to run a script I called "scrub.sh". That way I can isolate errors: if I have a script file that I have tested and that works when I run it manually, then when a scheduler runs it and it doesn't work, I can be sure the problem is with the scheduler.
So to clarify, I broke what I wanted to do down into steps; my "project" plan was very simple, just two steps:
A. Create a file, "scrub.sh", which when run starts a scrub on all pools. Make a shortcut for it on the desktop to double-click whenever I want a scrub.
B. Make a cron job to run that file every 2 weeks.
Thus far I am stuck on part A and never even started on part B.
However, my questions in this thread are twofold:
1. How do I fix my project so that it works?
2. Or should I scrap the idea entirely and do something else that will actually achieve the goal of an automatic scrub every two weeks? If so, how, and what?
That is about Solaris 10. I haven't used RBAC on Solaris 11, but as far as I can see there is no built-in Primary Administrator profile there. I think you can create a profile appropriate for your needs, for example one with just the zfs command.
There are profiles related to filesystem management.
You can try them and see whether they meet your needs.
I have tested one, and it looks OK for creating a ZFS filesystem.
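A sketch of what I did (the username and dataset are examples):
Code:
# as root: assign the profile to the user account
usermod -P "ZFS File System Management" bob
# then, as that user:
pfexec zfs create rpool/testfs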
Another edit
The "ZFS File System Management" profile works fine for the zfs command, but for the zpool command you need a different profile:
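As far as I can tell the one you want is "ZFS Storage Management" (that name is from my system; check /etc/security/prof_attr if yours differs). A sketch, again with an example username:
Code:
# as root: grant both the filesystem-level and the pool-level profiles
usermod -P "ZFS File System Management,ZFS Storage Management" bob
# then, as that user:
pfexec zpool scrub rpool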
And with that in place, zpool scrub works fine too.