"I regularly run and post various benchmarks comparing POHMELFS, NFS, XFS and Ext4, [the] main goal of POHMELFS at this stage is to be essentially as fast as [the] underlying local filesystem. And it is..." explained Evgeniy Polyakov, suggesting that the POHMELFS networking filesystem performs 10% to 300% faster than NFS, depending on the file operation. In particular, he noted that it still suffers from random reads, an area that he's currently focused on fixing. He summarized the new features found in the latest release:
"Read request (data read, directory listing, lookup requests) balancing between multiple servers; write requests are sent to multiple servers and completed only when all of them send an ack; [the] ability to add and/or remove servers from [the] working set at run-time from userspace; documentation (overall view and protocol commands); rename command; several new mount options to control client behaviour instead of hard coded numbers."
Looking forward, Evgeniy noted that this was likely the last non-bugfix release of the kernel client-side implementation, suggesting that the next release would focus on adding server-side features "needed for distributed parallel data processing (like the ability to add new servers via network commands from another server), so most of the work will be devoted to server code."
"This is a high performance network filesystem with a local coherent cache of data and metadata. Its main goal is distributed parallel processing of data," Evgeniy Polyakov said, announcing the latest version of his Parallel Optimized Host Message Exchange Layered File System. He noted that in addition to numerous bugfixes, the latest release includes the following new features:
"Full transaction support for all operations (object creation/removal, data reading and writing); Data and metadata cache coherency support; Transaction timeout based resending, if [a] given transaction did not receive [a] reply after specified timeout, [the] transaction will be resent (possibly to different server); Switched writepage path to ->sendpage() which improved performance and robustness of the writing."
Evgeniy also noted that he has started working on support for parallel data processing, one of the key intended features of the filesystem. He explained that initial logic has been added so that data can be written to multiple servers at the same time and reads can be balanced across those servers, though this logic is not yet used by the filesystem.
"This is a high performance network filesystem with local coherent cache of data and metadata. Its main goal is distributed parallel processing of data. Network filesystem is a client transport. POHMELFS protocol was proven to be superior to NFS in lots (if not all, then it is in a roadmap) operations."
This latest release prompted Jeff Garzik to reply, "this continues to be a neat and interesting project :)" New features include fast transactions, round-robin failover, and near wire-limit performance. This adds to existing features, which include a local coherent data and metadata cache, asynchronous processing of most events, and a fast and scalable multi-threaded userspace server. Planned features include a server extension to allow mirroring data across multiple devices, strong authentication, and possibly data encryption when transferring data over the network. Evgeniy linked to several benchmarks in his blog.
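A minimal sketch of round-robin failover of the kind mentioned above, with invented names and purely in userspace: the client walks the configured server list in order, skipping servers currently marked as down, rather than pinning all traffic to a single failed server.

    /* Minimal illustration of round-robin failover across a configured
     * server set; this is an assumption-laden example, not POHMELFS code. */
    #include <stdio.h>

    #define NR_SERVERS 3

    static int server_up[NR_SERVERS] = { 1, 1, 1 };

    /* Pick the next usable server after 'last', wrapping around and
     * skipping servers marked down. Returns -1 if none are up. */
    static int next_server(int last)
    {
        int i;

        for (i = 1; i <= NR_SERVERS; i++) {
            int candidate = (last + i) % NR_SERVERS;
            if (server_up[candidate])
                return candidate;
        }
        return -1;
    }

    int main(void)
    {
        server_up[1] = 0;                                   /* server 1 has failed */
        printf("failover from 0 -> %d\n", next_server(0));  /* prints 2 */
        return 0;
    }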
"Ceph is a distributed network file system designed to provide excellent performance, reliability, and scalability with POSIX semantics. I periodically see frustration on this list with the lack of a scalable GPL distributed file system with sufficiently robust replication and failure recovery to run on commodity hardware, and would like to think that--with a little love--Ceph could fill that gap," announced Sage Weil on the Linux Kernel mailing list. Originally developed as the subject of his PhD thesis, he went on to list the features of the new filesystem, including POSIX semantics, scalability from a few nodes to thousands of nodes, support for petabytes of data, a highly available design with no signle points of failure, n-way replication of data across multiple nodes, automatic data rebalancing as nodes are added and removed, and a Fuse-based client. He noted that a lightweight kernel client is in progress, as is flexible snapshoting, quotas, and improved security. Sage compared Ceph to other similar filesystems:
"In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely on symmetric access by all clients to shared block devices, Ceph separates data and metadata management into independent server clusters, similar to Lustre. Unlike Lustre, however, metadata and storage nodes run entirely in userspace and require no special kernel support. Storage nodes utilize either a raw block device or large image file to store data objects, or can utilize an existing file system (XFS, etc.) for local object storage (currently with weakened safety semantics). File data is striped across storage nodes in large chunks to distribute workload and facilitate high throughputs. When storage nodes fail, data is re-replicated in a distributed fashion by the storage nodes themselves (with some coordination from a cluster monitor), making the system extremely efficient and scalable."