ERROR: ZFS pool <Pool_Name> does not support boot environments


 
# 1  03-08-2016

Hello,

I am a newbie to the world of Solaris, so please excuse me if I make any silly points.

Recently I was trying to migrate a UFS file system to ZFS on Solaris 10 (x86 platform).

I followed the standard procedure documented online:

http://docs.oracle.com/cd/E19253-01/821-0438/ggeej/index.html

Steps:

- Create a zpool
- Run the lucreate command

The error I am getting is:

Code:
ERROR: ZFS pool <rpool> does not support boot environments.

I relabeled my disk with an SMI label using the "format -e" command after creating the zpool, but I am still not able to get past the lucreate step.
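For reference, a quick way to see which label the disk currently carries (just a sketch; c1t5d0 is the disk from my test below):

Code:
prtvtoc /dev/rdsk/c1t5d0s2    # on an SMI (VTOC) labelled disk this prints the usual slice table (0-7)
format -e c1t5d0              # the "label" menu item offers a choice between SMI and EFI labels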


Details from my test are below.

Code:
bash-3.2# zpool create rpool c1t5d0

<<Used the format -e command to overwrite the EFI label with an SMI label>>

bash-3.2# lucreate -c c0t0d0s0 -n newzfsBE -p rpool
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <c0t0d0s0>.
Creating initial configuration for primary boot environment <c0t0d0s0>.
INFORMATION: No BEs are configured on this system.
The device </dev/dsk/c0t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <c0t0d0s0> PBE Boot Device </dev/dsk/c0t0d0s0>.
ERROR: ZFS pool <rpool> does not support boot environments
WARNING: The boot environment definition file </etc/lutab> was removed because the PBE was not successfully defined and created.

NOTE: I am performing this test on a Solaris 10 VM created in VirtualBox.


Kindly suggest what else can be checked or done.


Thanks

# 2  03-08-2016
https://docs.oracle.com/cd/E19253-01...joc/index.html

Quote:
Disks used for the root pool must have a VTOC (SMI) label, and the pool must be created with disk slices.
Your config:
Code:
bash-3.2# zpool create rpool c1t5d0
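In other words, the root pool has to live on an SMI-labelled slice, not on the whole disk. A rough sketch of the sequence (disk c1t5d0 and slice s0 taken from your output; the slice layout you set up in format is your choice, and destroying the pool removes anything on it):

Code:
zpool destroy rpool            # drop the pool that was created on the whole disk (EFI label)
format -e                      # select c1t5d0, relabel it SMI, give slice 0 the space you want
zpool create rpool c1t5d0s0    # recreate the root pool on the slice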

# 3  03-09-2016
After creating the zpool, I put an SMI label on the disk used for the pool, i.e. c1t5d0 (overwriting the EFI label that was created along with the zpool).

Is there anything I missed?
# 4  03-09-2016
Did you read the link? Have you tried to create the zpool on a slice?
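A quick way to check (sketch only, names taken from the output above) is to look at the vdev that zpool status lists:

Code:
zpool status rpool    # a whole-disk pool lists c1t5d0 as the vdev; a slice-based pool lists c1t5d0s0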
# 5  03-09-2016
On the first go it was not clear to me that the root pool needs to be created on a slice.

After creating my pool on a slice, my issue is resolved!

Thanks DukeNuke2 for your help.
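For anyone who hits the same error later, this is roughly the sequence that worked for me (disk, slice and BE names are from my setup; lustatus/luactivate are the usual Live Upgrade follow-up steps and are only sketched here):

Code:
zpool create rpool c1t5d0s0                  # root pool on an SMI-labelled slice, not the whole disk
lucreate -c c0t0d0s0 -n newzfsBE -p rpool    # copy the current UFS BE into the ZFS pool
lustatus                                     # check that the new BE copy is complete
luactivate newzfsBE                          # activate it, then reboot (init 6)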