
Digging into the new features in OpenZFS post-Linux migration

ZFS on Linux 0.8 (ZoL) brought tons of new features and performance improvements when it was released on May 23. The release follows Delphix's announcement, back in March 2018, that it was migrating its own product to Linux. We'll go over some of the most exciting new features (like ZFS native encryption) here today.

For the full list—including both new features and performance improvements not covered here—you can visit the ZoL 0.8.0 release on GitHub. (Note that ZoL 0.8.1 was released last week, but since ZFS on Linux follows semantic versioning, it's a bugfix release only.)

Unfortunately for Ubuntu fans, these new features won't show up in Canonical's repositories for quite some time—October 2019's forthcoming interim release, Eoan Ermine, is still showing 0.7.12 in its repos. We can hope that Ubuntu 20.04 LTS (which has yet to be named) will incorporate the 0.8.x branch, but there's no official word so far; if you're running Ubuntu 18.04 (or later) and absolutely cannot wait, the widely used Jonathon F PPA has 0.8.1 available. Debian has 0.8.0 in its experimental repo, Arch Linux has 0.8.1 in its zfs-dkms AUR package, and Gentoo has 0.8.1 in testing at sys-fs/zfs. Users of other Linux distributions can find instructions for building packages directly from master at https://zfsonlinux.org/.
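
If you take the PPA route on Ubuntu, the setup looks something like the sketch below. The PPA identifier (ppa:jonathonf/zfs) and the package names are our assumptions based on common Ubuntu conventions; verify them on Launchpad before running anything.

    # PPA identifier and package names are assumptions; check Launchpad first
    sudo add-apt-repository ppa:jonathonf/zfs
    sudo apt update
    sudo apt install zfsutils-linux zfs-dkms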

That aforementioned Linux migration added Delphix's impressive array of OpenZFS developers to the large contingent already working on ZFS on Linux. In November, the FreeBSD project acknowledged the new de facto primacy of Linux as the flagship development platform for OpenZFS by rebasing its own OpenZFS codebase on ZFS on Linux rather than Illumos. In even better news for BSD fans, the porting work this requires will be adopted into the main codebase of ZFS on Linux itself, with PRs being merged from FreeBSD's new ZoL fork as work progresses.

The last few months have been extremely busy for ZFS on Linux—and by extension, the entire OpenZFS project. Historically, the majority of new OpenZFS development was done by employees working at Delphix, who in turn used Illumos as their platform of choice. From there, new code was ported relatively quickly to FreeBSD and somewhat more slowly to Linux.

But over the years, momentum built up for the ZFS on Linux project. The stream of improvements and bugfixes reversed course—almost all of the really exciting new features debuting in 0.8 originated in Linux, instead of being ported in from elsewhere.

New to ZFS?

If you're not sure what all this ZFS fuss is about, you may want to visit some past Ars Technica ZFS coverage.

Let's dig into the most important stuff.

ZFS native encryption

One of the most important new features in 0.8 is native ZFS encryption. Until now, ZFS users have relied on OS-provided encrypted filesystem layers either above or below ZFS. While this approach works, it presents difficulties: encryption (GELI or LUKS) below the ZFS layer decreases ZFS' native ability to assure data safety, while encryption above the ZFS layer (GELI or LUKS volumes created on ZVOLs) renders ZFS native compression (which tends to increase both performance and usable storage space when enabled) ineffective, since already-encrypted data is effectively incompressible.
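
Creating a natively encrypted dataset is a one-liner. Here's a minimal sketch using a hypothetical pool named tank; the encryption, keyformat, and keylocation properties are the ones 0.8 introduces:

    # Create an encrypted dataset; ZFS prompts for a passphrase
    zfs create -o encryption=aes-256-gcm \
               -o keyformat=passphrase \
               -o keylocation=prompt \
               tank/secure

    # After a reboot or pool import, load the key before mounting
    zfs load-key tank/secure
    zfs mount tank/secure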

The utility of native encryption doesn't stop with better integration and ease-of-use for encrypted filesystems, though; the feature also comes with raw encrypted ZFS replication. When you've encrypted a ZFS filesystem natively, it's possible to replicate that filesystem intact to a remote ZFS pool without ever decrypting (or decompressing) the data—and without the remote system ever needing to be in possession of the key that can decrypt it.

This feature, in turn, means that one could use ZFS replication to keep an untrusted remote backup system up to date. This makes it impossible—even for an attacker who's got root and/or physical access on the remote system—to steal the data being backed up there.
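
Here's what that looks like in practice, as a minimal sketch with hypothetical pool, dataset, and host names. The relevant addition in 0.8 is the raw send flag (-w, long form --raw) to zfs send:

    # Snapshot the encrypted dataset...
    zfs snapshot tank/secure@backup1

    # ...and replicate it raw: the stream stays encrypted end to end,
    # and backuphost never sees the key or the plaintext
    zfs send -w tank/secure@backup1 | ssh backuphost zfs receive backuppool/secure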

ZFS device removal

Among the most common complaints of ZFS hobbyists is that, if you bobble a command to add new disks to an existing ZFS pool, you can't undo it. You're stuck with a pool that includes single-disk vdevs and has effectively no parity or redundancy.

In the past, the only mitigation was to attach more disks to the new single-disk vdevs, upgrading them to mirrors; this might not be so bad if you're working with a pool of mirrors in the first place. But it's cold comfort if your pool is based on RAIDz (striped with parity) vdevs—or if you're just plain out of money and/or bays for new disks.

Beginning with 0.8.0, device removal is possible in a limited number of scenarios with the new zpool remove command. A word to the wise, however—device removal isn't trivial, and it shouldn't be done lightly. A pool that has had devices removed ends up with what amounts to CNAMEs for the missing storage blocks; filesystem calls referencing blocks originally stored on the removed disks first look for the original block, then get redirected to the block's new location. This should have relatively little impact on a device mistakenly added and immediately removed, but it could have serious performance implications if used to remove devices with many thousands of used blocks.
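
A sketch of the mistake-and-recovery scenario, using a hypothetical pool named tank and device /dev/sdf. Keep in mind that zpool remove works on single-disk and mirror top-level vdevs, not on RAIDz members:

    # Oops: this adds sdf as a new non-redundant top-level vdev
    # (zpool warns about mismatched redundancy and requires -f to proceed)
    zpool add -f tank /dev/sdf

    # New in 0.8: evacuate the device's blocks to the rest of the pool and remove it
    zpool remove tank /dev/sdf

    # Evacuation runs in the background; watch its progress here
    zpool status tank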

TRIM support in ZFS

One of the longest-standing complaints about ZFS on Linux is its lack of TRIM support for SSDs. Without TRIM, the performance of an SSD degrades significantly over time—after several years of unTRIMmed hard use, an SSD can easily be down to 1/3 or less of its original performance.

If your point of comparison is conventional hard disks, this doesn't matter too much; a good SSD will typically have five or six times the throughput and 10,000 times the IOPS of even a very fast rust disk. So what's a measly 67% penalty among friends? But if you're banking on the system's as-provisioned performance, you're in trouble.

Luckily, 0.8 brings support for both manual and automatic TRIM to ZFS. Most users and administrators will want to use the autotrim pool property to enable automatic, real-time TRIM support; extremely performance-sensitive systems may elect instead to schedule regular TRIM tasks with zpool trim during off-hours windows of lighter storage activity.
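
Both modes are a single command against the pool (named tank here for illustration):

    # Automatic, real-time TRIM as blocks are freed
    zpool set autotrim=on tank

    # Or a manual pass, suitable for a nightly or weekly cron job
    zpool trim tank

    # 0.8 also adds a -t flag to zpool status to report TRIM progress
    zpool status -t tank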

ZFS pool checkpoints

Checkpoints aren't as glamorous as the features we've already mentioned, but they can certainly save your bacon. Think of a checkpoint as something like a pool-wide snapshot. But where a snapshot preserves the state of a single dataset or ZVOL, a checkpoint preserves the state of the entire pool.

If you're about to enable a new feature flag that changes on-disk format (which would normally be irreversible), you might first zpool checkpoint the pool, allowing you to roll it back to the pre-upgrade condition. Checkpoints can also be used to roll back otherwise-irreversible dataset- or ZVOL-level operations, such as destroy. Accidentally zfs destroy an entire dataset, when you only meant to destroy one of its snapshots? If you've got a checkpoint, you can roll that action back.
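
The workflow, sketched against a hypothetical pool named tank. Note that a pool holds at most one checkpoint at a time, and rewinding discards everything written after it:

    # Take a checkpoint before doing anything risky
    zpool checkpoint tank

    # ...disaster strikes: the wrong dataset gets destroyed...
    zfs destroy tank/important

    # Rewind the entire pool to the checkpoint (requires export and re-import)
    zpool export tank
    zpool import --rewind-to-checkpoint tank

    # Or, once the risky operation succeeds, discard the checkpoint
    zpool checkpoint -d tank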

