RogueProeliator wrote:[ObGeekyTech: I switched zfs implementations, and somewhere during the upgrade the filesystem record size got reset from 8K (good for PostgreSQL) to the default of adaptive 128K (not good at all). This is not a problem you'll have on HFS+.]
Out of curiosity, what sparked the desire for ZFS -- you using it to do snapshots or another feature that steered you in that direction?
The killer feature, for me, is integrity - the file system never needs "checking". Every few weeks, zfs goes around and scrubs all the data, knows if any of it went bad, and fixes it (from a mirror disk). That file I stuck in ten years ago will still be there, intact, when I ask for it - not probably, but certainly.
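The scrub can be kicked off (or watched) by hand as well; a minimal sketch, assuming a pool named "tank" (the pool name is hypothetical, not from the original post):

```shell
# Walk every block in the pool, verify its checksum, and repair
# any bad copies from the mirror:
sudo zpool scrub tank

# Watch progress, and see how many bytes (if any) were repaired:
sudo zpool status tank
```

On most setups this is simply scheduled periodically (cron or launchd); the pool stays fully usable while the scrub runs.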
I regularly pull a disk off the mirror and send it offsite for backup. The previous disk comes back and re-integrates into the pool. Disaster recovery consists of installing zfs on some mac and attaching the offsite disk; it mounts right up. Essentially, the backup format is the file system format; I could mount that disk on Linux or Solaris or anywhere else where openzfs is available.
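The post doesn't show the exact commands; one common way to do this rotation with OpenZFS is `zpool split`, which peels a mirror member off into its own importable pool (all pool and device names below are hypothetical):

```shell
# Split one disk out of the mirror "tank" into a standalone,
# importable pool named "tank-offsite":
sudo zpool split tank tank-offsite disk2

# When the previously-offsite disk comes back, reattach it to the
# mirror; zfs resilvers it back up to date:
sudo zpool attach tank disk1 disk3

# Disaster recovery on any machine with OpenZFS installed:
sudo zpool import tank-offsite
```

The split-off disk is a complete, self-describing copy of the pool at the moment of the split, which is why it "mounts right up" anywhere OpenZFS runs.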
Snapshots are handy. Snapshot clones are way handy - I can clone an old view of a filesystem while the original is up and working, make separate changes, and then (if I choose) swap it out for the original in constant time. (And still have the original to put back if I screwed that up...)
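The clone-then-swap workflow looks roughly like this; a sketch with hypothetical dataset names, using the standard `zfs promote`/`rename` dance:

```shell
# Snapshot the live filesystem and clone that point-in-time view:
sudo zfs snapshot tank/data@before
sudo zfs clone tank/data@before tank/data-work

# ...make separate changes on tank/data-work while tank/data
# stays up and in use...

# To swap the clone in for the original: promote it (so it no
# longer depends on the original's snapshot), then rename. Both
# steps are constant time regardless of data size.
sudo zfs promote tank/data-work
sudo zfs rename tank/data tank/data-old
sudo zfs rename tank/data-work tank/data
```

Keeping `tank/data-old` around is exactly the "original to put back" safety net mentioned above.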
Zfs can stream snapshot deltas between pools while the source pool is mounted and in use. Last time I upgraded my server hardware, I installed a new set of disks, streamed all the zfs content over, verified that the new server worked fine, and the final downtime was <1 hour to make the switch.
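That migration pattern can be sketched with `zfs send`/`zfs recv`; hostnames, pool names, and snapshot names here are hypothetical:

```shell
# Bulk copy while the old server stays in service:
sudo zfs snapshot -r tank@migrate1
sudo zfs send -R tank@migrate1 | ssh newserver sudo zfs recv -F newtank

# Just before the cutover, send only the delta accumulated since
# the first snapshot - this is the small, fast final step:
sudo zfs snapshot -r tank@migrate2
sudo zfs send -R -i tank@migrate1 tank@migrate2 | \
    ssh newserver sudo zfs recv -F newtank
```

Because only the final incremental send happens during the outage window, the downtime is bounded by the recent churn, not the total pool size - which is how the switch fits in under an hour.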
There are risks and drawbacks, to be sure - this is an open source product, and Apple might screw it up one day. But then, my server still runs 10.8.5 and I don't feel like I'm missing anything terribly important, so...
Cheers
-- perry