Filesystems are either pretty simple or really complex. The old DOS FAT filesystem just overwrote the first character of a file name with a marker byte (0xE5, which rendered as σ in code page 437), so you could usually undelete with a utility that changed the name back, as long as nothing had reused the blocks.
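To make the mechanism concrete, here's a toy sketch in Python (all names invented for illustration, not real FAT code): deletion only stamps the first byte of the directory entry, so undelete is just putting that byte back, provided the data blocks weren't reallocated in the meantime.

```python
DELETED_MARKER = 0xE5  # the byte DOS wrote over the first name character

def delete(entry: bytearray) -> int:
    """Mark a directory entry deleted; return the byte we overwrote."""
    first = entry[0]
    entry[0] = DELETED_MARKER
    return first

def undelete(entry: bytearray, original_first_byte: int) -> None:
    """Restore the name -- only valid if no blocks were reused."""
    entry[0] = original_first_byte

entry = bytearray(b"REPORT  TXT")  # 8.3-style name field
saved = delete(entry)
assert entry[0] == DELETED_MARKER   # name now starts with the marker
undelete(entry, saved)
assert entry == bytearray(b"REPORT  TXT")  # name fully recovered
```

The file's actual data never moves in either direction, which is exactly why this kind of undelete is so cheap and also why it silently fails once the blocks get reused.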
Modern filesystems are an absolute wonder of spinning wheels inside of spinning wheels: allocating ranges of blocks, then doing bookkeeping to reorganize linked data structures as fast as an SSD can write, or as efficiently as possible on spinning rust.
Some copy-on-write filesystems can do snapshotting, either automatically (like NetApp ONTAP) or manually (like ZFS): once you take a snapshot, further changes preserve a point-in-time view of the data that you can treat like a special directory, where you can cd in and copy data back out as it was, as long as you have the space. Windows supports this kind of functionality with the VSS API if the underlying FS tech supports it.
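A minimal copy-on-write sketch in Python shows why this works (invented class and names, nothing like a real filesystem): a snapshot just copies block references, and later writes go to new blocks, so the snapshot keeps seeing the old ones.

```python
class CowFile:
    """Toy copy-on-write file: a list of block references."""

    def __init__(self, data: bytes):
        self.blocks = [data]      # the live view
        self.snapshots = {}       # name -> frozen list of block refs

    def snapshot(self, name: str) -> None:
        # Copy the *references*, not the data -- snapshots are cheap.
        self.snapshots[name] = list(self.blocks)

    def overwrite(self, index: int, data: bytes) -> None:
        # Write goes to a new block; any snapshot still holds the old one.
        self.blocks[index] = data

    def read(self, snapshot=None) -> bytes:
        view = self.snapshots[snapshot] if snapshot else self.blocks
        return b"".join(view)

f = CowFile(b"v1")
f.snapshot("before")
f.overwrite(0, b"v2")
print(f.read())            # b'v2' -- the live data
print(f.read("before"))    # b'v1' -- the snapshot's frozen view
```

This also illustrates the space cost mentioned below: the old `b"v1"` block can't be freed while the "before" snapshot still references it.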
The downside to these approaches is that they tend to cause fragmentation, can use a lot of extra space (after all, if you delete a TB, it may be because you meant to and needed to, so if you mean it, why hasn't it gone away?), and add a lot of complexity that, 99% of the time, 99% of people don't want to think about or pay for (pay as in: it's slower, uses more space, and the complexity leads to more failure modes).
Regarding Windows: does NTFS support this, like COW and other advanced stuff? Or am I doomed if the underlying FS is NTFS?
At work I have to use Windows (11), and I always have the feeling file-related stuff (copying, moving, doing stuff in large git repos) takes a long time compared to my own devices that run Linux with Btrfs.
I'm no Windows expert, but as far as I know, the way to get snapshots on NTFS is via VSS, which usually works by making a block-level snapshot that can be mounted independently and used read-only. I don't believe NTFS was designed to implement, or has been updated with, the kind of filesystem features I was describing.
And yes, NTFS is usually slower at a lot of day-to-day things. It's very sophisticated in some respects, but it's traditionally not strong at dealing with lots of small file operations across lots of files, something Linux filesystems tend to be good at. Git, for example, was written to support kernel development, and if a filesystem change would have sped git up after git exposed a performance weakness, well, I believe that has precedent.