Hi HN,
Since I first posted an introduction of ratarmount [0] two years ago, many features have been added.
To summarize, ratarmount enables working with archived contents exposed as a filesystem without the data having to be extracted to disk:
pip install ratarmount
ratarmount archive.tar mounted
ls -la mounted
I started this project after noticing how slow archivemount is with large TAR files and wondering why that could be: the file contents exist at some known offset in the archive file, so it should not be difficult to read that data.
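That core idea can be illustrated with nothing but the Python standard library: a TAR member's bytes sit verbatim in the archive right after a 512-byte header, so once you know the offset, a plain seek-and-read suffices, with no extraction step. (This is just a sketch of the concept, not ratarmount's actual implementation.)

```python
import io
import tarfile

# Build a small TAR in memory.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = b"hello from inside the archive"
    info = tarfile.TarInfo(name="greeting.txt")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# Look up where the member's data starts inside the archive.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    member = tar.getmember("greeting.txt")
    offset, size = member.offset_data, member.size

# Read the file's bytes directly from the archive, no extraction needed.
raw = buf.getvalue()[offset:offset + size]
print(raw)  # b'hello from inside the archive'
```

The hard part, as noted below, is everything around this: compressed streams, seeking, indexing, and the many archive formats.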
Turns out, that part was not difficult. However, packaging everything nicely, adding tests, and supporting many more formats and features, such as union mounting and recursive mounting, are what has kept me busy on this project to this day.
Since the last Show HN, libarchive, SquashFS, fsspec, and many more backends have been added, so it should now be able to read every format that archivemount can and then some, and even read archives remotely.
However, performance for any use case besides bzip2- or gzip-compressed TARs may vary, even though I did my best.

Personally, I use it to view packed folders with many small files that no longer change. I pack these folders because copying them to other hard drives otherwise takes much longer. I also use it when I want to avoid the command line: I have added ratarmount as a Caja user script for mounting via right-click. This way, I can mount an archive and then copy the contents to another drive, effectively doing the extraction and copying in one step. Initially, I also used it to train on the ImageNet TAR archive directly.
I probably should have released a 1.0.0 some years ago, because I have kept the command line interface and even the index file format as compatible as possible across the many 0.x versions already.
Some larger future features on my wishlist are:
- A new indexed_lz4 backend. This should be doable inside my indexed_bzip2 [1] / rapidgzip [2] backend library.
- A custom ZIP and SquashFS reader accelerated by rapidgzip and indexed_bzip2 to enable faster seeking inside large files inside those archives.
- I am eagerly awaiting the Linux Kernel FUSE BPF support [3], which might enable some further latency reductions for use cases with very small files / very small reads, at least when working with uncompressed archives. I have done comparisons for such archives (100k images of 100 KiB each) and noticed that direct access via the Python library ratarmountcore was roughly twice as fast as access via ratarmount and FUSE. Maybe I'll even find the time to play around with the existing unmerged FUSE BPF patch set.
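The direct library access mentioned above looks roughly like the sketch below. The method names (open, listDir, getFileInfo) follow the ratarmountcore README at the time of writing; treat them as assumptions and check them against your installed version. The archive path is a placeholder.

```python
# Hedged sketch: bypassing FUSE by using the ratarmountcore library directly.
import importlib.util

have_rmc = importlib.util.find_spec("ratarmountcore") is not None
if have_rmc:
    import ratarmountcore as rmc

    archive = rmc.open("archive.tar")          # backend is auto-detected
    print(archive.listDir("/"))                # list entries at the root
    # "/some/file" is a hypothetical path inside the archive.
    info = archive.getFileInfo("/some/file")   # stat-like metadata
    with archive.open(info) as f:              # file-like object
        data = f.read()
```

Skipping the FUSE layer this way avoids the per-request kernel round trips, which is where the roughly 2x speedup for many small reads comes from.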
[0] https://news.ycombinator.com/item?id=30631387
[1] https://news.ycombinator.com/item?id=31875318
[2] https://news.ycombinator.com/item?id=37378411
[3] https://lwn.net/Articles/937433/