"After another round of performance tuning HAMMER all my benchmarks show HAMMER within 10% of UFS's performance, and it beats the shit out of UFS in certain tests such as file creation and random write performance," noted DragonFly BSD creator Matthew Dillon, providing an update on his new clustering filesystem. He continued, "read performance is good but drops more then UFS under heavy write loads (but write performance is much better at the same time)." He then referred to the blogbench benchmark noting, "now when UFS gets past blog #300 and blows out the system caches, UFS's write performance goes completely to hell but it is able to maintain good read performance." Matthew then compared this to HAMMER:
"HAMMER is the opposite. It can maintain fairly good write performance long after the system caches have been blown out, but read performance drops to about the same as its write performance (remember, this is blogbench doing reads from random files). Here HAMMER's read performance drops significantly but it is able to maintain write performance. UFS's write performance basically comes to a dead halt. However, HAMMER's performance numbers become 'unstable' once the system caches are blown out."
In a series of seven patches, Arnd Bergmann proposed adding in-memory write support to mounted cramfs file systems. He explained, "the intention is to use it for instance on read-only root file systems like CD-ROM, or on compressed initrd images. In either case, no data is written back to the medium, but remains in the page/inode/dentry cache, like ramfs does." Reactions were mixed. When Arnd suggested this as an alternative to using the more complex unionfs to overlay a temporary filesystem over a read-only file system, and that similar support could be added to other file systems, it was pointed out that there was ultimately more gained by focusing on a single solution that worked with all filesystems. David Newall stressed, "multiple implementations is a recipe for bugs and feature mismatch." Erez Zadok suggested, "I favor a more generic approach, one that will work with the vast majority of file systems that people use w/ unioning, preferably all of them." He went on to add that more gains would be had from modifying the union destination filesystem rather than multiple source filesystems. Arnd agreed in principle, but noted it would add complexity. He indicated that he'd explore the idea further, then explained:
"My idea was to have it in cramfs, squashfs and iso9660 at most, I agree that doing it in even a single writable file system would add far too much complexity. I did not mean to start a fundamental discussion about how to do it the right way, just noticed that there are half a dozen implementations that have been around for years without getting close to inclusion in the mainline kernel, while a much simpler approach gives you sane semantics for a subset of users."
"This is a high performance network filesystem with a local coherent cache of data and metadata. Its main goal is distributed parallel processing of data," Evgeniy Polyakov said, announcing the latest version of his Parallel Optimized Host Message Exchange Layered File System. He noted that in addition to numerous bugfixes, the latest release includes the following new features:
"Full transaction support for all operations (object creation/removal, data reading and writing); Data and metadata cache coherency support; Transaction timeout based resending, if [a] given transaction did not receive [a] reply after specified timeout, [the] transaction will be resent (possibly to different server); Switched writepage path to ->sendpage() which improved performance and robustness of the writing."
Evgeniy also noted that he has started working on support for parallel data processing, one of the key intended features of the filesystem. He explained that initial logic has been added so data can be written to multiple servers at the same time, and reads can be balanced across the multiple servers, though the logic is not yet being used by the filesystem.
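As a conceptual sketch of what such balancing might look like (this is not POHMELFS code; the server list, retry policy, and names are invented for illustration), a client can fan writes out to every server and rotate reads among them, resending to the next server when one times out:

    /* Conceptual sketch of fan-out writes and round-robin reads with resend.
     * The "servers" are just in-memory stand-ins; all names are illustrative. */
    #include <stdio.h>
    #include <stdbool.h>

    #define NSERVERS 3

    struct server {
        const char *name;
        bool up;
        char data[64];
    };

    static struct server servers[NSERVERS] = {
        { "srv0", true,  "" },
        { "srv1", false, "" },   /* pretend this one is unreachable */
        { "srv2", true,  "" },
    };

    /* Writes go to every server so any of them can satisfy later reads. */
    static void write_all(const char *data)
    {
        for (int i = 0; i < NSERVERS; i++)
            if (servers[i].up)
                snprintf(servers[i].data, sizeof(servers[i].data), "%s", data);
    }

    /* Reads rotate across servers; a "timed out" server is skipped and the
     * request is resent to the next one, mirroring timeout-based resending. */
    static const char *read_balanced(void)
    {
        static int next;
        for (int tries = 0; tries < NSERVERS; tries++) {
            struct server *s = &servers[next];
            next = (next + 1) % NSERVERS;
            if (s->up) {
                printf("read served by %s\n", s->name);
                return s->data;
            }
            printf("%s timed out, resending to next server\n", s->name);
        }
        return NULL;
    }

    int main(void)
    {
        write_all("payload");
        for (int i = 0; i < 4; i++)
            read_balanced();
        return 0;
    }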
Pawel Dawidek first ported ZFS to FreeBSD from OpenSolaris in April of 2007. He continues to actively port new ZFS features from OpenSolaris, and focuses on improving overall ZFS stability. During the introduction to his talk at BSDCan, he explained that his goal was to offer an accessible view of ZFS internals. His discussion was broken into three sections: a review of the layers ZFS is built from and how they work together, a look at unique features found in ZFS and how they work internally, and a report on the current status of ZFS in FreeBSD.
The BSDCan website notes that Pawel is a FreeBSD committer, adding:
"In the FreeBSD project, he works mostly in the storage subsystems area (GEOM, file systems), security (disk encryption, opencrypto framework, IPsec, jails), but his code is also in many other parts of the system. Pawel currently lives in Warsaw, Poland, running his small company."
"This is a high performance network filesystem with local coherent cache of data and metadata. Its main goal is distributed parallel processing of data. Network filesystem is a client transport. POHMELFS protocol was proven to be superior to NFS in lots (if not all, then it is in a roadmap) operations."
This latest release prompted Jeff Garzik to reply, "this continues to be a neat and interesting project :)" New features include fast transactions, round-robin failover, and performance close to the wire limit. This adds to existing features, which include a local coherent data and metadata cache, async processing of most events, and a fast and scalable multithreaded user-space server. Planned features include a server extension to allow mirroring data across multiple devices, strong authentication, and possibly encrypting data as it travels over the network. Evgeniy linked to several benchmarks in his blog.
Matthew Dillon sent out a series of updates about his developing HAMMER filesystem, noting that he is currently focusing on the reblocking and pruning code, tracking down a number of bugs resulting in B-Tree corruption. He also noted that HAMMER was previously composed of three components: B-Tree nodes, records, and data. In his latest cleanups he has entirely removed the record structure: "this will seriously improve the performance of directory and inode access." The change did require an on-media format change: "I know I have said this before, but there's a very good chance that no more on-media changes will be made after this point. The official freeze of the on-media format will not occur until the 2.0 release, however."
Matt added, "HAMMER is stable enough now that I am able to run it on my LAN backup box. I'm using it to test that the snapshots work as expected as well as to test the long term effects of reblocking and pruning." He then cautioned:
"Please note that HAMMER is not ready for production use yet, there is still the filesystem-full handling to implement and much more serious testing of the reblocking and pruning code is required, not to mention the crash recovery code. I expect to find a few more bugs, but I'm really happy with the results so far."
"Btrfs v0.14 is now available for download," Chris Mason announced, adding, "please note the disk format has changed, and it is not compatible with older versions of Btrfs." The project has gained a new wiki home page on the kernel.org domain, where it is explained, "Btrfs is a new copy on write filesystem for Linux aimed at implementing advanced features while focusing on fault tolerance, repair and easy administration. Initially developed by Oracle, Btrfs is licensed under the GPL and open for contribution from anyone." Regarding the latest release, Chris explained:
"v0.14 has a few performance fixes and closes some races that could have allowed corrupted metadata in v0.13. The major new feature is the ability to manage multiple devices under a single Btrfs mount. Raid0, raid1 and raid10 are supported. Even for single device filesystems, metadata is now duplicated by default. Checksums are verified after reads finish and duplicate copies are used if the checksums don't match."
Chris offered links to multi-device benchmarks, summarizing, "in general these numbers show that Btrfs does a good job at scaling to this storage configuration, and that it is on par with both HW raid and MD." Looking forward, he concluded, "next up on the Btrfs todo list is finishing off the device removal and IO error handling code. After that I'll add more fine grained locking to the btrees."
"HAMMER is going to be a little unstable as I commit the crash recovery code," began DragonFly BSD creator Matthew Dillon, adding, "I'm about half way through it." He went on to list what's left for crash recovery to work with HAMMER, his new clustering filesystem, "I have to flush the undo buffers out before the meta-data buffers; then I have to flush the volume header so mount can see the updated undo info; then I have to flush out the meta-data buffers that the UNDO info refers to; and, finally, the mount code must scan the UNDO buffers and perform any required UNDOs." He continued:
"The idea being that if a crash occurs at any point in the above sequence, HAMMER will be able to run the UNDOs to undo any partially written meta-data. HAMMER would be able to do this at mount-time and it would probably take less then a second, so basically this gives us our instant crash-recovery feature."
Matt went on to add that, because the front-end VFS operations are now cleanly separated from the back-end I/O, it becomes possible to fix several stalls in the code, significantly improving HAMMER's performance.
"Who did the reverse-engineering, and how was it done? Please make us confident that we won't get our butts sued off or something."
Jörn Engel posted the sixth version of patches introducing his new LogFS filesystem for flash devices to the Linux kernel. He highlighted some areas of the code that need more work, and cc'd the appropriate people for further review. Regarding LogFS itself, he noted that one of its big advantages over other solutions is improved mount time and reduced memory consumption: "LogFS has an on-medium tree, fairly similar to Ext2 in structure, so mount times are O(1)." He went on to add that flash is becoming more and more common in standard PC hardware, explaining:
"Flash behaves significantly different to hard disks. In order to use flash, the current standard practice is to add an emulation layer and an old-fashioned hard disk filesystem. As can be expected, this is eating up some of the benefits flash can offer over hard disks. In principle it is possible to achieve better performance with a flash filesystem than with the current emulated approach. In practice our current flash filesystems are not even near that theoretical goal. LogFS in its current state is already closer."
"Here is a new flash file system developed by Nokia engineers with help from the University of Szeged. The new file-system is called UBIFS, which stands for UBI file system. UBI is the wear-leveling/ bad-block handling/volume management layer which is already in mainline (see drivers/mtd/ubi)," began Artem Bityutskiy. He explained that UBIFS is stable and "very close to being production ready", aiming to offer improved performance and scalability compared to JFFS2 by implementing write-back caching, and storing a file-system index rather than rebuilding it each time the media is mounted. The write-back cache implementation claims to offer around a 100 time improvement in write performance over JFFS2. Artem went on to note:
"UBIFS works on top of UBI, not on top of bare flash devices. It delegates crucial things like garbage-collection and bad eraseblock handling to UBI. One important thing to note is MLC NAND flashes which tend to have very small eraseblock lifetime - just few thousand erase-cycles (some have even about 3000 or less). This makes JFFS2 random wear-leveling algorithm to be not good enough. In opposite, UBI provides good wear-leveling based on saved erase-counters."
Matthew Dillon posted an update on his evolving HAMMER filesystem, noting that it "passes all standard filesystem stress tests and buildworld will run with a HAMMER /usr/obj". He also noted, "pruning and reblocking code is in and partially tested, but now needs more stringent testing; full historical access appears to be working but needs testing." He added, "there are two big-ticket and several little-ticket items left. HAMMER will officially go Alpha when the big-ticket items are done, and beta when we get a few of the little-ticket items done." The two "big-ticket" items left to be completed are the UNDO crash recovery code and handling for full filesystems. Matt summarized:
"I have no time frame for these items yet. It will depend on how quickly HAMMER moves to Alpha and Beta status. I will say, however, now that HAMMER's on-disk format has solidified, that I have a very precise understanding of the protocols that will be needed to accomplish fully cache coherent remote access for both replicated and non-replicated (remote mount style) access. And, as you know, fully coherent filesystem access across machines is going to be the basis for DragonFly's clustering across said machines. In summary, things are progressing very well."
"There are lots of things in the FS that need deep thought,and the perfect system to fully use the first 64k of a 1TB filesystem isn't quite at the top of my list right now."
"HAMMER won't be ready for sure (things take however long they take), but the hardest part is working and stable and I'm just down to garbage collection and crash recovery," noted Matthew Dillon, discussing the status of what is ultimately intended to be a highly available clustering filesystem. The upcoming DragonFlyBSD release this month was originally intended to be 2.0 with a beta quality HAMMER, but the decision was recently made to call the release 1.12 while HAMMER continues to stabilize. Matt continued, "HAMMER is really shaping up now. Here's what works now: all filesystem operations; all historical operations; all Pruning features". During the discussion, he was asked how he planned to support multi-master replication, in reply to which he began:
"My current plan is to use a quorum algorithm similar to the one I wrote for the backplane database years ago. But there are really two major (and very complex) pieces to the puzzle. Not only do we need a quorum algorithm, but we need a distributed cache coherency algorithm as well. With those two pieces individual machines will be able to proactively cache filesystem data and guarantee transactional consistency across the cluster."
"Work continues to progress well but I've hit a few snags," noted Matthew Dillon, referring to the ongoing development of his HAMMER filesystem. He began by highlighting a number of problems with the current design, then adding, "everything else now works, and works well, including and most especially the historical access features." He continued:
"I've come to the conclusion that I am going to have to make a fairly radical change to the on-disk structures to solve these problems. On the plus side, these changes will greatly simplify the filesystem topology and greatly reduce its complexity. On the negative side, recovering space will not be instantaneous and will basically require data to be copied from one part of the filesystem to another."
Matt detailed his solution, which included getting rid of the previously described clusters, super-clusters, A-lists, and per-cluster B-Trees, "instead have just one global B-Tree for the entire filesystem, able to access any record anywhere in the filesystem", adding that the filesystem would be implemented "as one big huge circular FIFO, pretty much laid down linearly on disk, with a B-Tree to locate and access data." He detailed the many improvements, noting that this also makes it possible to provide efficient real-time mirroring. He concluded, "it will probably take a week or two to rewire the topology and another week or two to debug it. Despite the massive rewiring, the new model is much, MUCH simpler than the old, and all the B-Tree code is retained (just extended to operate across the entire filesystem instead of just within a single cluster)."
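The layout Matt describes, one large circular FIFO with a global B-Tree indexing into it, implies that free space is recovered by copying still-live data forward so the tail of the FIFO can advance, which is why reclaiming space is no longer instantaneous. A toy model of that reclaim step, with invented structures (HAMMER's real reblocker is far more involved):

    /* Toy model of a circular-FIFO log: new records append at the head, and space
     * is reclaimed by copying still-live records forward so the tail can advance.
     * Structures are invented; HAMMER's real reblocker is far more involved. */
    #include <stdio.h>
    #include <stdbool.h>

    #define LOGSIZE 8

    struct record { int key; bool live; };

    static struct record fifo[LOGSIZE];
    static int head, tail;    /* logical indices into the circular FIFO */

    static void append(int key)
    {
        fifo[head % LOGSIZE] = (struct record){ .key = key, .live = true };
        head++;
    }

    /* Reclaim: copy live records at the tail to the head, then advance the
     * tail past them, freeing contiguous space at the rear of the FIFO. */
    static void reclaim(int nslots)
    {
        for (int i = 0; i < nslots && tail < head; i++) {
            struct record r = fifo[tail % LOGSIZE];
            if (r.live)
                append(r.key);   /* live data is copied forward */
            tail++;              /* dead or copied: the tail may now advance */
        }
    }

    int main(void)
    {
        for (int k = 1; k <= 4; k++)
            append(k);
        fifo[1].live = false;          /* record 2 was deleted or superseded */
        reclaim(2);
        printf("tail=%d head=%d (space for %d new records)\n",
               tail, head, LOGSIZE - (head - tail));
        return 0;
    }

In the real filesystem the B-Tree is updated to point at each record's new location as it is copied, which is also what makes the same copy-forward pass usable for pruning history and for streaming a mirror.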