Another advantage of hardware RAID controllers, if you have them: replacing a failed disk is usually trivial with modern controllers.
Remove the failed disk, insert a new one, and the controller typically rebuilds the array on its own.
As for the ZFS features you'd give up by using hardware RAID: just ZFS's own software RAID (mirroring and RAID-Z).
Use the hardware RAID if you have it.
If this is supposed to be a production server that will have a long life and have to go through multiple operating system upgrades and patches, you'll also want to use a total of four disks for two completely separate ZFS root pools. Put the four disks into two separate mirrored hardware RAID arrays.
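A rough sketch of that layout, assuming the two hardware RAID mirrors show up as c0t0d0 and c0t1d0 (the device names and the second pool's name are placeholders for illustration, not literal values for your system):

```shell
# First root pool goes on the first hardware-mirrored "disk".
# On Solaris 11, the installer normally creates rpool for you.
zpool create rpool c0t0d0

# Second root pool on the second hardware-mirrored "disk",
# used as the target for new boot environments later.
zpool create rpool2 c0t1d0

# Sanity-check both pools.
zpool status rpool rpool2
```

Since each "disk" here is already a hardware RAID mirror, the ZFS pools themselves are single-device pools; redundancy is handled below ZFS by the controller.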
See the man page for "beadm".
Also, read this:
https://blogs.oracle.com/orasysat/en...ng_solaris_111
One problem with that, in my experience: using just one root pool will result in a convoluted mess of ZFS snapshots and clones as your boot environments evolve over the life of the server. But if you always create a new boot environment in a ZFS pool that's separate from the ZFS pool the source boot environment resides on, there's no mess of ZFS snapshots and clones created.
Why would you want to use boot environments? Because you can create a new boot environment, patch and upgrade the new one while your server is still running, then simply boot to the new environment. And if it fails, you just reboot back to the old one.
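That workflow looks roughly like this with beadm, assuming a second root pool named rpool2 as described above (the BE name and pool name are illustrative):

```shell
# Create a new boot environment in the *other* root pool,
# so no snapshot/clone chain builds up in the source pool.
beadm create -p rpool2 solaris-patched

# Patch/upgrade inside the new BE while the current one keeps running,
# e.g. with pkg update targeting the new BE (details vary by setup).

# Make the new BE the default at next boot, then reboot into it.
beadm activate solaris-patched
init 6

# If it fails, boot the old BE from the boot menu, or:
beadm activate <old-BE-name>
```

The -p option is what lets you place the new BE in a separate pool instead of cloning within the current one.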
It's a lot more reliable than "yum upgrade". Try reverting that if it doesn't work...