Another comment:
A con against ZFS is the inability to remove vdevs. A vdev (virtual device) is a subpart of a volume: every volume is built from one or more vdevs.
Example:
Say you have a data volume consisting of a single disk (= one vdev, 1 TB). You decide to replace your single-disk vdev with a raid-1 vdev (1 TB), since you want to add redundancy to be safe in case of a disk crash. That's possible. Over the years, you add another 2 vdevs (2x2 TB, 2x4 TB) as raid-1 mirrors. You then have 3 vdevs making up your volume, consisting of 2 disks each, with an overall capacity of 7 TB.
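In ZFS terms those growth steps would look roughly like this (a minimal sketch only; the pool name tank and all disk names are made up for illustration):

    # attach a second 1 TB disk to the single-disk vdev, turning it into a mirror
    # (tank and the diskN names are hypothetical placeholders)
    zpool attach tank disk1 disk1b
    # later, grow the pool by adding two more mirror vdevs (the 2 TB and 4 TB pairs)
    zpool add tank mirror disk2a disk2b
    zpool add tank mirror disk3a disk3b

Note that growing a pool this way is easy; it's only the reverse direction, taking vdevs out again, that ZFS doesn't give you.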
You now decide you want to increase your storage again and simultaneously reorganize your 3 x raid-1 (6 disks => 7 TB usable) into 1 x raidz2 (5x6 TB => 18 TB usable), to be able to cope with more simultaneous disk crashes (2 disk crashes without data loss here) and at the same time reduce the number of active disks (6 -> 5).
With ZFS this is only possible by reformatting, since device removal is not fully supported yet. So you have to copy all the data, which must be done offline. ZFS top-level device removal is in development at the moment, but I expect some years to pass until even raidz vdevs can be removed.
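The usual workaround is a send/receive migration to a fresh pool, roughly like this (again just a sketch; the pool names tank/tank2 and the snapshot name are invented, and you need enough spare disks to have both pools attached at once):

    # build the new raidz2 pool from the five 6 TB disks (dN names are placeholders)
    zpool create tank2 raidz2 d1 d2 d3 d4 d5
    # snapshot everything recursively and replicate it to the new pool
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -F tank2
    # only after the copy can the old pool be destroyed and its disks freed
    zpool destroy tank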
With LVM you can just add the new underlying disks and remove the old disks. No problem, and all of it can be done online. Btrfs can do that too, and is even flexible enough to do more advanced migrations.
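For comparison, here's what that disk swap looks like with LVM and btrfs (a sketch under assumed names: the volume group vg0, the mount point /mnt, and the device paths are all placeholders):

    # LVM: bring in a new disk, migrate extents off an old one, then drop it
    pvcreate /dev/sdf
    vgextend vg0 /dev/sdf
    pvmove /dev/sda            # runs online, data stays available throughout
    vgreduce vg0 /dev/sda
    # btrfs: the equivalent online add/remove, plus profile conversion via balance
    btrfs device add /dev/sdf /mnt
    btrfs device delete /dev/sda /mnt
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt

The balance with -dconvert/-mconvert is the "more advanced migration" part: btrfs can change the redundancy profile of a mounted filesystem in place, which neither LVM nor ZFS offers in this form.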
And here are some experience reports about btrfs and ZFS from users:
ZFS Vs BTRFS : linux
Some not-too-old data loss stories about btrfs are there as well. I assume the cause may be lacking knowledge about how the file system operates. But of course that's only a suspicion.