Sunday, November 19, 2006

Solaris 10 filesystem encryption

Update: please note that this was written a decade ago, and described what was essentially a hack that could be used to achieve slow storage encryption on Solaris 10. I do not recommend using this approach now or in the future.

Like many people today, I wanted to set up filesystem encryption on Solaris 10. It didn't need to be fast (just as well, since hardware encryption is currently out of the question). At the minimum, I wanted Data At Rest (DAR) storage encryption supporting a filesystem layered on top. The requirements I came up with were:

1) no plaintext stored on disk
2) encryption must be transparent to applications
3) filesystem must provide full POSIX semantics
4) availability is less important than integrity
(if data is returned, it must be correct)

On Linux, this would be easy to do. Create a LUKS-format encrypted block device, map it through dm-crypt, then stick a filesystem on top with mkfs.  My laptop is actually protected this way already.
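For reference, the Linux recipe is roughly the following sketch - the device name, filesystem type and mount point here are illustrative, not from my actual laptop setup:

```shell
# Illustrative only: /dev/sda2, ext3 and /mnt/secure are example choices.
cryptsetup luksFormat /dev/sda2            # write the LUKS header, set a passphrase
cryptsetup luksOpen /dev/sda2 cryptvol     # map the decrypted view through dm-crypt
mkfs -t ext3 /dev/mapper/cryptvol          # ordinary filesystem on the mapped device
mount /dev/mapper/cryptvol /mnt/secure
```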

On Solaris, it should be this easy.  Encrypted filesystems are becoming increasingly necessary in Enterprise environments, and Enterprise computing shops are by far Sun's biggest customers.  It will be this easy if Sun ever releases the encrypted lofi driver they've been talking about.  Unfortunately, it isn't currently this easy.

Considering the problem, the old CFS code from Matt Blaze sprang to mind as a possibility. It had been years since I'd looked at this code - if it worked, it would make my encryption problems go away (and potentially cause my availability problems to start...). Unfortunately, some quick research determined that this wouldn't work, because the code (even if it was stable) doesn't support full POSIX semantics (e.g. chmod, chown, etc. don't work by design, since the filesystem owner is the only user granted access).

Some web research failed to turn up a good alternative solution.

Further pondering. Perhaps I could use the CFS code to provide an encrypted file which I could then use as the backend for a filesystem through lofi? That might just work.

This left major concerns about data integrity. Using code that hasn't been maintained in 5 years, and has almost certainly never been compiled by its author on an Opteron running Solaris 10/amd64, seems risky.  ZFS places integrity above performance (why else would they checksum every block?). zpools can also be constructed directly on top of files from another filesystem. Could ZFS on CFS be a match made in heaven?

It turns out that you can indeed create zpools on top of files encrypted with CFS. CFS has no largefiles support, so if you need the zpool to be larger than 2GB, you'll need to create multiple backing store files and add them as separate vdevs into the zpool.
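Since each CFS-backed file tops out at 2GB, sizing the pool comes down to deciding how many backing files to create. A small sketch of the arithmetic (the pool size and paths are example values, and mkfile is the Solaris tool for preallocating files):

```shell
# Given a target pool size in GB (pool_gb is an assumed example value),
# work out how many 2GB CFS backing files are needed and print the
# mkfile commands that would create them.
pool_gb=10
files=$(( (pool_gb + 1) / 2 ))   # ceiling of pool_gb / 2GB-per-file
i=1
while [ "$i" -le "$files" ]; do
    printf 'mkfile 2g /crypt/space/store%04d.zpool\n' "$i"
    i=$(( i + 1 ))
done
```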

This leaves us with a way to create POSIX filesystems on Solaris 10 with data integrity assured by ZFS and with all data 3DES encrypted through CFS.

If you're worried about exposing rpcbind or cfsd to the network, block the traffic using IP Filter (which is almost certainly installed on your Solaris 10 box already).
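A minimal ipf.conf fragment for this might look like the rules below. The interface name bge0 is an assumption, and cfsd registers on a dynamically assigned RPC port, so check rpcinfo -p to find the additional port you need to block:

```
# Block inbound rpcbind (port 111) on the external interface.
# bge0 is an example interface name - substitute your own.
block in quick on bge0 proto tcp from any to any port = 111
block in quick on bge0 proto udp from any to any port = 111
```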

The caveat to all of this is that if ZFS does discover corruption, you've probably lost your data, so this is only going to work for storage of transient data - don't stick your Oracle 10g environment on this. Also, and potentially more limiting, even on modern hardware it's still slow as a dog: ~4MB/sec for 3DES encryption of sequential I/O on a 2 GHz Opteron.

HOWTO

1) download cfs-1.4.1 (originally downloaded from http://www.crypto.com/software/)
2) install it, configure it and mount a CFS filesystem as /crypt
3) within CFS, create as many 2GB files as you need to hold your data
4) create a zpool from these (zpool create -m none crypt /crypt/space/store0001.zpool /crypt/space/store0002.zpool ...)
5) create a filesystem on this (zfs create crypt/test; zfs set mountpoint=/test crypt/test)
6) play with your encrypted filesystem
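Concretely, steps 3 through 5 might look something like this - assuming CFS is already mounted and an encrypted directory is attached under /crypt/space, with file names and counts purely illustrative:

```shell
# Step 3: create 2GB backing files inside the CFS-encrypted directory.
mkfile 2g /crypt/space/store0001.zpool
mkfile 2g /crypt/space/store0002.zpool

# Step 4: build the pool from the encrypted files; -m none defers mounting.
zpool create -m none crypt \
    /crypt/space/store0001.zpool \
    /crypt/space/store0002.zpool

# Step 5: carve out a filesystem and give it a mountpoint.
zfs create crypt/test
zfs set mountpoint=/test crypt/test
```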

For extra integrity, consider using RAIDZ when creating the pool.  That way, in the limited failure scenario of CFS corrupting one backing file, the data should still be recoverable.
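A single-parity RAIDZ variant of step 4 would look something like this (RAIDZ needs at least three vdevs, so at least three backing files; names are illustrative):

```shell
# One backing file's worth of capacity is given up to parity, so a
# corrupted backing file can be reconstructed from the others.
zpool create -m none crypt raidz \
    /crypt/space/store0001.zpool \
    /crypt/space/store0002.zpool \
    /crypt/space/store0003.zpool
```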
