HAMMER is really shaping up now. Here's what works now:

* All filesystem operations
* All historical operations
* All pruning features

Here's what is left:

* freemap code (allocate and free big-blocks, which are 8MB blocks). Currently a hack so everything else can be tested; nothing is actually freed.
* undo FIFO and related recovery code. Most of the API calls are in place; the back-end buffer reservation, flushes, and recovery need to be implemented.
* big-block cleaning code (this is different from the pruning code).
* Structural locking. The B-Tree is fine-grained locked, but the locks for the blockmap are just a hack (one big lock).

These are all fairly low-difficulty items; most of the infrastructure needed to support them is already in place, and the FIFO infrastructure has already been tested (just not mapped onto a blockmap yet).

I have already run some tests on the blockmap allocation model and it looks very good. What I did was implement an array of blockmap entry structures rather than just an array of pointers to the actual physical big-blocks. The blockmap entry structure not only has a pointer to the underlying physical big-block, it also has a bytes_free field which specifies how many bytes in the underlying big-block are free. This is the only tracking done by the blockmap. It does not actually try to track WHERE in the big-block the free areas are... figuring that out will be up to the cleaning code.

What this gives us is the following:

* Extremely fast freeing of on-disk storage elements. The target physical block doesn't have to be read or written, only the governing blockmap entry. With 8MB big-blocks and 32-byte blockmap entries, one 16K buffer can track 4GB worth of underlying storage, which means that freeing large amounts of sparse ...
I'm fond of telling political hotheads (which you once were, but no longer ;o) that, before they destroy the system devised by their predecessors, they owe it to themselves to stop and find out exactly what problems their predecessors thought they were solving when they invented the existing system. So -- in this instance you are both the establishment and the revolutionary at the same time. Could you explain in extra-bonehead language what problems you were solving with the cluster model, and if you are still solving those problems with the newer model? Thanks!
Have you decided how to implement multi-master replication yet? -- Jason Smethers
*snip* Struggling today with a situation wherein 82 gigabytes of data were moved into an IMAP trash folder on UFS2, outrunning inodes, names, et al. before disk space (plenty of that left), and a cleanup that gets: /bin/rm: Argument list too long unless I script it into manageable chunks.... If HAMMER fs has a better mechanism - even a rather BFBI one - to handle that sort of need for massive deletions, it will make a convert here. Bill
Well, unless someone implements some kernel-space globbing support (which would be nice for other reasons, alas..) you may still be stuck with similar issues. Of course, ls *match* | xargs -n 100 rm or something along those lines is a perfectly good way of doing it.. Adrian -- Adrian Chadd - email@example.com
*snip* Per my off-list note, I have gotten much of it cleaned up with scripting et al. But the larger issue is indeed expanded globbing support *somewhere*. And not just for new file systems, but old ones that have long since outgrown the currently available resources. Massive drive and name space isn't a lot of use if we are short on matching utils to manage it well. Bill