writing.exchange is one of the many independent Mastodon servers you can use to participate in the fediverse.
A small, intentional community for poets, authors, and every kind of writer.


#filesystem


I have found that all of the "solutions" I've looked at just lock you into some more specific ecosystem, so I went back to the revolutionary idea of using the #filesystem. I keep my photos and videos in a folder structure on my laptop, organized by year and trip.

I don't auto-backup from my #iPhone or #Sonya6700 anymore; that really just synced a load of cruft I had to delete or pay to store. I move the photos I want to my laptop, where I adjust and edit them in #darktable / #rawtherapee / #digikam

🧵 2/4

#ReleaseWednesday Just pushed a new version of thi.ng/block-fs, now with additional multi-command CLI tooling to convert & bundle a local file system tree into a single block-based binary blob (e.g. for bundling assets, distributing a virtual filesystem as part of a web app, snapshot testing, or as a bridge for WASM interop, etc.)

Also new, the main API now includes a `.readAsObjectURL()` method to wrap files as URLs to binary blobs with associated MIME types, thereby making it trivial to use the virtual filesystem for sourcing stored images and other assets for direct use in the browser...

(Ps. For more context see other recent announcement: mastodon.thi.ng/@toxi/11426498)
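The core idea of bundling a file tree into one binary blob can be sketched in a few lines of Python. This is purely a toy illustration of the technique (the `pack`/`read` helpers and the index layout are invented here for demonstration), not block-fs's actual on-disk format:

```python
import json
import struct

def pack(files: dict[str, bytes]) -> bytes:
    """Bundle a {path: contents} tree into one binary blob.
    Toy layout: 4-byte index length, JSON index {path: [offset, size]}, raw data."""
    index = {}
    data = bytearray()
    for path, contents in files.items():
        index[path] = [len(data), len(contents)]
        data += contents
    header = json.dumps(index).encode()
    return struct.pack(">I", len(header)) + header + bytes(data)

def read(blob: bytes, path: str) -> bytes:
    """Look up one file inside the blob without unpacking the rest."""
    (hlen,) = struct.unpack(">I", blob[:4])
    index = json.loads(blob[4 : 4 + hlen])
    offset, size = index[path]
    start = 4 + hlen + offset
    return blob[start : start + size]

blob = pack({"img/logo.png": b"\x89PNG...", "data/cfg.json": b"{}"})
print(read(blob, "data/cfg.json"))  # b'{}'
```

The same random-access property is what makes a single blob usable as a virtual filesystem: each lookup touches only the index and one byte range.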

Linux 6.15’s exFAT file deletion performance boosted

A notable change has been spotted in the upcoming Linux 6.15 kernel: a big improvement to the exFAT file system implementation in how it deletes files when the "discard" mount option is used. The improvement saves significant time: in a test after the merge, deleting a large file took 1.6 seconds, compared to more than 4 minutes before.

The pull request changes file deletion to discard a group of contiguous clusters (that is, clusters that are next to each other) in a single batch instead of discarding them one by one. As the patch notes, in prior kernels such as 6.14, "if the discard mount option is enabled, the file's clusters are discarded when they are freed. Discarding clusters one by one will significantly reduce performance. Poor performance may cause soft lockup when lots of clusters are freed."
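The batching idea can be sketched in Python (a toy illustration of the technique, not the kernel code): coalesce the freed cluster numbers into contiguous runs, so each run needs only one discard request instead of one per cluster.

```python
def coalesce(clusters: list[int]) -> list[tuple[int, int]]:
    """Group cluster numbers into (start, length) runs of contiguous clusters.
    Each run can then be discarded with a single request."""
    runs: list[tuple[int, int]] = []
    for c in sorted(clusters):
        if runs and c == runs[-1][0] + runs[-1][1]:
            # Extends the previous run by one cluster.
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            runs.append((c, 1))
    return runs

freed = [100, 101, 102, 103, 500, 501, 900]
print(coalesce(freed))  # [(100, 4), (500, 2), (900, 1)] -> 3 discards instead of 7
```

For a large, mostly-contiguous file this collapses millions of per-cluster discard requests into a handful of range discards, which is where the minutes-to-seconds speedup comes from.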

The change was introduced in commit a36e0ab. Since then, the pull request has been merged into the kernel and it will be integrated into the first release candidate of Linux 6.15. A simple performance benchmark was run with the following commands:

# truncate -s 80G /mnt/file
# time rm /mnt/file

In detail, without this commit the filesystem's deletion performance is poor: about 4 minutes and 46 seconds of real time, with 12 seconds of system time. With the patched kernel, it takes about 1 second of real time, with 17 milliseconds of system time.

It’s a huge improvement!

Image by diana.grytsku on Freepik

> https://github.com/tuxera/ntfs-3g/wiki/Manual#alternate-data-streams-ads

Wait, so #NTFS does some weird files-as-objects-with-slots but still untyped binary streams thing?

Damn, with that and #Transactional NTFS, it really *is* the closest thing to having implemented the #database #filesystem I wish for (which would be typed, of course).

Shame that got deprecated. (It'd still be lacking #integrity features too but damn, so close yet so far.)
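As a toy mental model of the "files-as-objects-with-slots" idea (purely illustrative Python, invented here; not how NTFS or ntfs-3g actually store anything): a file is an object holding several named, untyped byte streams, with the anonymous stream playing the role of the "normal" file contents.

```python
class NtfsLikeFile:
    """Toy model: a file as a bag of named, untyped binary streams."""

    def __init__(self, data: bytes = b""):
        # "" is the unnamed/default stream, i.e. what you see as the file's contents.
        self.streams: dict[str, bytes] = {"": data}

    def write(self, data: bytes, stream: str = "") -> None:
        self.streams[stream] = data

    def read(self, stream: str = "") -> bytes:
        return self.streams[stream]

f = NtfsLikeFile(b"main contents")
f.write(b"user.rating=5", stream="metadata")  # roughly: file.txt:metadata on NTFS
print(f.read())            # b'main contents'
print(f.read("metadata"))  # b'user.rating=5'
```

The "untyped" complaint above is visible here: every slot is just bytes, with no schema or type attached to any stream.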

hey hey #Linux #FileSystem #ZFS #RAID #XFS entities! I'm looking for extremely opinionated discourses on alternatives to ZFS on Linux for slapping together a #JBOD ("Just a Bunch Of Disks", "Just a Buncha Old Disks", "Jesus! Buncha Old Disks!", etc) array.

I like ZFS, but the fact that it's not in-tree in-kernel is an issue for me. What I need most is reliability and stability (specifically regarding parity) here; integrity is the need. Read/write don't have to be blazingly fast (not that I'm mad about it).

I also have one #proxmox ZFS array where a raw disk image is stored for a #Qemu #VirtualMachine; in the VM, it's formatted to XFS. That "seems" fine in limited testing thus far (and seems fast?, so it does seem like the defaults got the striping correct), but I kind of hate how I have multiple levels of abstraction here.

I don't think there's been any change on the #BTRFS front re: raid-like array stability (I like and use BTRFS for single-disk filesystems), although I would love for that to be different.

I'm open to #LVM, etc, or whatever might help me stay in-tree and up to date. Thank you! Boosts appreciated and welcome.

#techPosting

#btrfs-progs 6.13 is out:

lore.kernel.org/all/2025021423

github.com/kdave/btrfs-progs/r

Some highlights:

mkfs:
* new option to enable compression
* updated summary (subvolumes, compression)

scrub:
* start: new option --limit to set the bandwidth limit for the duration of the run

btrfstune:
* add option to remove squota

other:
* a bit more optimized crc32c code


In old cowboy #books I read as a kid there were people who had lots of #skills--saddling horses, caring for livestock, fixing fences, maintaining tools, cooking on the trail, harnessing oxen, repairing wagons and barrels, etc. They were always in demand and someone always needed what they were doing.

I think people who understand file systems, network protocols, a couple of coding languages, graphics, etc. are kind of like that, now.

EDIT: As @Shanmonster has reminded me, the physical cowboy skills don't stop being awesome just because lots of our world shifted to an information/service model. If you can code AND fix a fence... damn. You're killing it.

ZFS and PBS are simply awesome - and you can use it on BSD but also on Linux! It ensures your data integrity and provides you additional benefits with features like compression and deduplication. Imagine this disk space usage without it?!

Edit: s/PBS/ZFS/
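Block-level deduplication, one of the features praised here, can be illustrated with a toy Python sketch (invented for illustration; not ZFS code): store each fixed-size block only once, keyed by its hash, and compare logical versus stored size.

```python
import hashlib

def dedup_size(data: bytes, block: int = 4096) -> tuple[int, int]:
    """Return (logical_size, stored_size) after naive block-level dedup."""
    seen: set[bytes] = set()
    stored = 0
    for i in range(0, len(data), block):
        chunk = data[i : i + block]
        digest = hashlib.sha256(chunk).digest()
        if digest not in seen:
            seen.add(digest)
            stored += len(chunk)
    return len(data), stored

# Ten identical 4 KiB blocks are stored only once:
logical, stored = dedup_size(b"\x00" * 4096 * 10)
print(logical, stored)  # 40960 4096
```

Real ZFS dedup works on-write with a dedup table and checksums per record, but the space-accounting intuition is the same: identical blocks cost storage once.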

#zfs #storage #linux

I deleted 10GB of video (a few big files) from iPhoto library yesterday and today deleted about 20GB of photos. My used volume space on the boot drive where the library lives did not reduce at all in either instance.

Yes, I also deleted them from the Recently Deleted folder in the Photos app -- so they are _gone_. Also, I do save photos fully locally (I'm not set to save space by pulling some of the data from iCloud).

I rebooted, still the same after.

What is going on here?

I want to free up data. I see there is 120GB "purgeable" on the volume.

What apfs fuckery is this?

#Apple #apfs #Mac