"I wasn't planning on releasing v0.12 yet, and it was supposed to have some initial support for multiple devices. But, I have made a number of performance fixes and small bug fixes, and I wanted to get them out there before the (destabilizing) work on multiple-devices took over," explained Chris Mason regarding the 0.12 release of his new btrfs filesytem. Btrfs was first announced in June of 2007, as an alpha-quality filesystem offering checksumming of all files and metadata, extent based file storage, efficient packing of small files, dynamic inode allocation, writable snapshots, object level mirroring and striping, and fast offline filesystem checks, among other features. The project's website explains, "Linux has a wealth of filesystems to choose from, but we are facing a number of challenges with scaling to the large storage subsystems that are becoming common in today's data centers. Filesystems need to scale in their ability to address and manage large storage, and also in their ability to detect, repair and tolerate errors in the data stored on disk." Regarding the latest release, Chris offered:
"So, here's v0.12. It comes with a shiny new disk format (sorry), but the gain is dramatically better random writes to existing files. In testing here, the random write phase of tiobench went from 1MB/s to 30MB/s. The fix was to change the way back references for file extents were hashed."
"The following patches have been in the -mm tree for a while, and I plan to push them to Linus when the 2.6.25 merge window opens," began Theodore Ts'o, offering the patches for review before they are merged. He explained that the patches introduce some of the final changes to the ext4 on-disk format, "ext4, shouldn't be deployed to production systems yet, although we do salute those who are willing to be guinea pigs and play with this code!" He continued:
"With this patch series, it is expected that [the] ext4 format should be settling down. We still have delayed allocation and online defrag which aren't quite ready to merge, but those shouldn't affect the on-disk format. I don't expect any other on-disk format changes to show up after this point, but I've been wrong before.... any such changes would have to have a Really Good Reason, though."
Chris Mason announced version 0.10 of his new Btrfs filesystem, listing the following new features: "explicit back references, online resizing (including shrinking), in place conversion from Ext3 to Btrfs, data=ordered support, mount options to disable data COW and checksumming, and barrier support for SATA and IDE drives". He noted that the v0.10 disk format has changed and is not compatible with the v0.9 disk format. Regarding back reference support, Chris explained, "the core of this release is explicit back references for all metadata blocks, data extents, and directory items. These are a crucial building block for future features such as online fsck and migration between devices. The back references are verified during deletes, and the extent back references are checked by the existing offline fsck tool." He then detailed the new Ext3 to Btrfs conversion utility:
"The conversion program uses the copy on write nature of Btrfs to preserve the original Ext3 FS, sharing the data blocks between Btrfs and Ext3 metadata. Btrfs metadata is created inside the free space of the Ext3 filesystem, and it is possible to either make the conversion permanent (reclaiming the space used by Ext3) or roll back the conversion to the original Ext3 filesystem."
"This patch speeds up e2fsck on Ext3 significantly using a technique called Metaclustering," stated Abhishek Rai. In an earlier thread he quantified this claim, "this patch will help reduce full fsck time for ext3. I've seen 50-65% reduction in fsck time when using this patch on a near-full file system. With some fsck optimizations, this figure becomes 80%." Most criticism so far has been in regards to formatting issues with the patch preventing it from being easily tested, resolved in the latest postings. It was also cautioned that the patch affects a significant amount of ext3 code, and thus will require very heavy testing. Abhishek described how the patch offers its significant gains for e2fsck:
"Metaclustering refers to storing indirect blocks in clusters on a per-group basis instead of spreading them out along with the data blocks. This makes e2fsck faster since it can now read and verify all indirect blocks without much seeks. However, done naively it can affect IO performance, so we have built in some optimizations to prevent that from happening. Finally, the benefit in fsck performance is noticeable only when indirect block reads are the bottleneck which is not always the case, but quite frequently is, in the case of moderate to large disks with lot of data on them. However, when indirect block reads are not the bottleneck, e2fsck is generally quite fast anyway to warrant any performance improvements."
"HAMMER is progressing very well with only 3-4 big-ticket items left to do," noted DragonFlyBSD creator Matthew Dillon regarding the ongoing development of his highly available clustering filesystem, "I'm really happy with the progress I'm making". He listed "on-the-fly recovery, balancing, refactoring of the spike code, and retention policy scan" as the remaining items needing to be implemented. "everything else is now working and reasonably stable. Of the remaining items only the spike coding has any real algorithmic complexity. Recovery and balancing just require brute force and the physical record deletion. The retention policy scan needs is already coded and working (just not the scan itself)."
Matt then defined what he meant by 'spike', "basically, when a cluster (a 64MB block of the disk) fills up a 'spike' needs to be driven into that cluster's B-Tree in order to expand it into a new cluster. The spike basically forwards a portion of the B-Tree's key space to a new cluster." He added, "refactoring the spike code means doing a better job selecting the amount of key space the spike can represent." He noted that balancing refers to the act of balancing the B-Tree representation of the filesystem, "we want to slowly move physical data records from higher level clusters to lower level clusters, eventually winding up with a situation where the higher level clusters contain only spikes and lower level clusters are mostly full." Matt continued:
"Keep in mind that HAMMER is designed to handle very large filesystems... in particular, filesystems that are big enough that you don't actually fill them up under normal operation, or at least do not quickly fill them up and then quickly clean them out. The balancing code is expected to need upwards of a day (or longer) to slowly iron out storage inefficiencies. If a situation comes up where faster action is needed, then faster action can be taken. I intend to take advantage of the fact that most filesystems (and, really, any large filesystem), takes quite a while to actually become full."
A recent thread on the FreeBSD -current mailing list discussed the stability of ZFS on FreeBSD. Scott Long noted that ZFS requires proper tuning to be stable:
"I guess what makes me mad about ZFS is that it's all-or-nothing; either it works, or it crashes. It doesn't automatically recognize limits and make adjustments or sacrifices when it reaches those limits, it just crashes. Wanting multiple gigabytes of RAM for caching in order to optimize performance is great, but crashing when it doesn't get those multiple gigabytes of RAM is not so great, and it leaves a bad taste in my mouth about ZFS in general."
ZFS was committed in April of 2007 by Pawel Dawidek, who noted that he is using ZFS quite successfully on all of his systems. He then cautioned, "of course all this doesn't mean ZFS works great on FreeBSD. No. It is still an experimental feature." In response to some negative comments about ZFS on FreeBSD, Pawel noted, "in my opinion people are panicking in this thread much more than ZFS:) Let['s] try to think how we can warn people clearly about proper tuning and what proper tuning actually means. I think we should advise increasing KVA_PAGES on i386 and not only vm.kmem_size. We could also warn that running ZFS on 32bit systems is not generally recommended."
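The tuning Pawel refers to was typically done through loader tunables and, on i386, a custom kernel. The following values are illustrative examples in the spirit of the FreeBSD ZFS tuning advice of the era, not recommended settings; appropriate numbers depend on the machine's RAM and workload:

    # /boot/loader.conf -- example values only
    vm.kmem_size="1024M"
    vm.kmem_size_max="1024M"
    vfs.zfs.arc_max="512M"     # cap the ARC so it cannot exhaust kmem

    # i386 kernel configuration: enlarge the kernel virtual address space
    options KVA_PAGES=512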
"HAMMER is progressing well. The filesystem basically works, but there are some major pieces missing such as, oh, the recovery code, and I still have a ton of issues to work through... the poor fs blows up when it runs out of space, for example, due to the horrible spike implementation I have right now," DragonFlyBSD creator Matthew Dillon stated. HAMMER is a new highly available clustering filesystem aimed to be of beta quality by the DragonFlyBSD 2.0 release later this month. Matt notes,
"It isn't stable yet but some major milestones have been achieved. I am able to cpdup, rm -rf, and perform historical queries on deleted data."
Matt went on to caution, "please note that HAMMER is *NOT* yet ready for wider testing. Please don't start reporting bugs yet, because there are still tons of things for me to work through."
"HAMMER work is still progressing well, I hope to have most of it working in a degenerate single-cluster (64MB filesystem) case by the end of next week. (cluster == 64MB block of the disk, not cluster as in clustering)," noted Matthew Dillon on the DragonFlyBSD mailing list. He continued, "gluing the per-cluster B-Tree's together for the multi-cluster case is turning out to be more of a headache and will probably take at least 2 weeks to get working. Some fairly sophisticated heuristics will be needed to avoid unnecessary copying between clusters." Matt went on to note that the next DragonFlyBSD release will likely be delayed a month:
"I may decide to move the 2.0 release to mid-January to give myself some more time. This is similar to what we did for 1.8. Also, I think a January release is better then a Christmas release because people get busy with christmas-like things. I want the filesystem to be at least beta quality as of the release and I don't think its possible to get it there by mid-December."
Miklos Szeredi posted a request for comments titled "fuse writable mmap design". He explained, "writable shared memory mappings for fuse are something I've been trying to implement forever. Now hopefully I've got it all worked out; it survives indefinitely with bash-shared-mapping and fsx-linux, and I'd like to solicit comments about the approach." He went on to describe the patch:
"fuse_writepage() allocates a new temporary page with GFP_NOFS|__GFP_HIGHMEM. It copies the contents of the original page, and queues a WRITE request to the userspace filesystem using this temp page. From the VM's point of view, the writeback is finished instantly: the page is removed from the radix trees, and the PageDirty and PageWriteback flags are cleared. The per-bdi writeback count is not decremented until the writeback truly completes. [...] On dirtying the page, fuse waits for a previous write to finish before proceeding. This makes sure, there can only be one temporary page used at a time for one cached page."
"Ceph is a distributed network file system designed to provide excellent performance, reliability, and scalability with POSIX semantics. I periodically see frustration on this list with the lack of a scalable GPL distributed file system with sufficiently robust replication and failure recovery to run on commodity hardware, and would like to think that--with a little love--Ceph could fill that gap," announced Sage Weil on the Linux Kernel mailing list. Originally developed as the subject of his PhD thesis, he went on to list the features of the new filesystem, including POSIX semantics, scalability from a few nodes to thousands of nodes, support for petabytes of data, a highly available design with no signle points of failure, n-way replication of data across multiple nodes, automatic data rebalancing as nodes are added and removed, and a Fuse-based client. He noted that a lightweight kernel client is in progress, as is flexible snapshoting, quotas, and improved security. Sage compared Ceph to other similar filesystems:
"In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely on symmetric access by all clients to shared block devices, Ceph separates data and metadata management into independent server clusters, similar to Lustre. Unlike Lustre, however, metadata and storage nodes run entirely in userspace and require no special kernel support. Storage nodes utilize either a raw block device or large image file to store data objects, or can utilize an existing file system (XFS, etc.) for local object storage (currently with weakened safety semantics). File data is striped across storage nodes in large chunks to distribute workload and facilitate high throughputs. When storage nodes fail, data is re-replicated in a distributed fashion by the storage nodes themselves (with some coordination from a cluster monitor), making the system extremely efficient and scalable."
"I'm pleased to announce [the] 7'th and final release of the distributed storage subsystem (DST)," Evgeniy Polyakov stated, completing the TODO list on the project's web page. He titled the release, "squizzed black-out of the dancing back-aching hippo", noting, "it clearly shows my condition". New features in this release include checksum support, extended auto-configuration for detecting and auto-enabling checksums if supported by the remote host, new sysfs files for marking a given node as clean (in-sync) or dirty (not-in-sync), and numerous bug fixes.
Evgeniy released the first version of his distributed storage subsystem in July of 2007. In September he explained that this was the first step in a larger distributed filesystem project he's planning. In late October, Andrew Morton noted that the work looked ready to be merged into his -mm kernel.
"I'm pleased to announce another release of Squashfs. This is the 22nd release in just over five years. Squashfs 3.3 has lots of nice improvements, both to the filesystem itself (bigger blocks and sparse files), but also to the Squashfs-tools Mksquashfs and Unsquashfs," stated Phillip Lougher about the latest release of the compressed read-only Linux filesystem. He noted that he still needed to fix filesystem endianness, then he was going to focus on getting Squashfs into the mainline kernel. New features found in this latest release include:
"1. Maximum block size has been increased to 1Mbyte, and the default block size has been increased to 128 Kbytes. This improves compression.
"2. Sparse files are now supported. Sparse files are files which have large areas of unallocated data commonly called holes. These files are now detected by Squashfs and stored more efficiently. This improves compression and read performance for sparse files."
"Speaking of on-disk B-Trees, ReiserFS' biggest problems are all based on its use of flexible B-Trees," suggested a reader on the DragonFlyBSD Kernel mailing list, pointing to the difficulty of detecting a failed node and then of rebuilding the B-Tree. HAMMER filesystem designer and author, Matt Dillon, explained, "if a cluster needs to be recovered, HAMMER will simply throw away the B-Tree and regenerate it from scratch using the cluster's record list. This way all B-Tree I/O operations can be asynchronous and do not have to be flushed on fsync. At the same time HAMMER will remove any records whose creation transaction id's are too large (i.e. not synchronized with the cluster header), and will zero out the delete transaction id for any records whos deletion transaction id's are too large." Matt then acknowledged:
"The real performance issue for HAMMER is going to be B-Tree insertions and rebalancing across clusters. I think most of the issues can be resolved with appropriate heuristics and by a background process to slowly rebalance clusters. This will require a lot of work, though, and only minimal rebalancing will be in [the end-of-the-year] release."
"I will be continuing to commit bits and pieces of HAMMER, but note that it will probably not even begin to work for quite some time," Matthew Dillon reported on the new clustering filesystem he's developing for DragonFlyBSD. He noted, "I am still on track for it to make it into the end-of-year release." Matt continued:
"My B-Tree implementation also allows HAMMER to cache B-Tree nodes and start lookups from any internal node rather then having to start at the root. You can do this in a standard B-Tree too but it isn't necessarily efficient for certain boundary cases. In my implementation I store boundaries for the left AND right side which means a search starting in the middle of the tree knows exactly where to go and will never have to retrace its steps."
Andrew Morton responded favorably to Evgeniy Polyakov's most recent release of his distributed storage subsystem, "I went back and re-read last month's discussion and I'm not seeing any reason why we shouldn't start thinking about merging this." He then asked, "how close is it to that stage? A peek at your development blog indicates that things are still changing at a moderate rate?" Evgeniy replied:
"I completed storage layer development itself, the only remaining todo item is to implement [a] new redundancy algorithm, but I did not see major demand on that, so it will stay for now with low priority. I will use DST as a transport layer for [a] distributed filesystem, and probably that will require additional features, I have no clean design so far, but right now I have nothing in the pipe to commit to DST."