Confirmation is not very effective unless you use the function rarely. If you use it a lot, confirming just goes into muscle memory.
The "shit, I didn't mean to do that" moment is really when conscious thought kicks back in. That's why undo is so great.
Filesystems are either pretty simple or really complex. The old DOS FAT filesystem just overwrote the first character of a file name with a marker byte (0xE5, which rendered as a Greek sigma), so usually you could undelete with a utility that changed the name back, as long as nothing had reused the blocks.
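A toy sketch of that mechanism (plain shell, nothing here touches a real FAT volume; the entry name is made up):

```shell
# Toy model of FAT-style delete/undelete. A real FAT directory entry keeps
# everything (attributes, start cluster, size); deletion only overwrites the
# first byte of the 8.3 name with the marker 0xE5.
LC_ALL=C                                  # treat the marker as a raw byte
entry='HELLO   TXT'                       # a fake 8.3 directory entry name
deleted=$(printf '\345%s' "${entry#?}")   # "delete": first byte becomes 0xE5
# Undelete = ask the user for the lost first character and put it back.
# This only works if nothing has reused the file's data blocks meanwhile.
restored="H${deleted#?}"
```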
Modern filesystems are an absolute wonder of spinning wheels inside of spinning wheels allocating ranges of blocks, and then doing bookkeeping to reorganize linked data structures as fast as an SSD can write or as efficiently as possible on spinning rust.
Some log-structured filesystems can do snapshotting, either automatically (NetApp ONTAP) or manually (ZFS). When you take a snapshot, further changes preserve a point-in-time view of the filesystem, which you can treat like a special directory: you can cd into it and copy data back out as it was, as long as you have the space. Windows supports this kind of functionality with the VSS API if the underlying filesystem tech supports it.
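For concreteness, here is roughly what that looks like on ZFS (the pool and dataset names are made up for illustration; the `.zfs` directory is hidden but can be entered):

```shell
# Assuming a dataset tank/home exists (names are hypothetical):
zfs snapshot tank/home@before-cleanup        # near-instant point-in-time view
rm -rf /tank/home/build-cache                # later changes don't touch the snapshot
ls /tank/home/.zfs/snapshot/before-cleanup/  # browse the old view, read-only
cp /tank/home/.zfs/snapshot/before-cleanup/notes.txt /tank/home/  # copy data back out
zfs destroy tank/home@before-cleanup         # reclaim the space once you're sure
```

The snapshot itself costs almost nothing up front; space is only consumed as the live dataset diverges from it, which is exactly the "why hasn't my deleted terabyte gone away" effect described below.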
The downside to these approaches is that they tend to cause fragmentation and can tie up a lot of extra space (after all, if you delete a TB, it may be because you meant to and needed the space back, so if you meant it, why hasn't it gone away?). They are also a lot of complexity that, 99% of the time, 99% of people don't want to think about or pay for (pay as in: it's slower, uses more space, and the complexity leads to more failure modes).
Regarding Windows: does NTFS support this, like COW and other advanced stuff? Or am I doomed if the underlying FS is NTFS?
At work I have to use Windows 11, and I always have the feeling that file-related stuff (copying, moving, working in large git repos) takes a looong time compared to my own devices that run Linux with btrfs.
I'm no Windows expert, but as far as I know, the way to get snapshots on NTFS is via VSS, which usually works by making a block-level snapshot that can be mounted independently and used read-only. I don't believe NTFS was designed with, or has been updated to implement, the kind of filesystem features I was describing.
And yes, NTFS is usually slower at a lot of day-to-day things. It's very sophisticated in some respects, but it's traditionally not strong at lots of small operations across lots of files, something Linux filesystems tend to be good at. Git, for example, was written to support kernel development, and if something that would speed up git required a filesystem change because git showed a performance weakness, I believe there's precedent for that change getting made.
I think the better way would be to replace rm with something that just moves files to a trash bin, like graphical file managers do.
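A minimal sketch of such a replacement (the function name, trash location, and naming scheme are my choices, not any standard):

```shell
# Move files into a trash directory instead of deleting them.
# TRASH_DIR and the timestamp-prefix scheme are arbitrary choices.
trash() {
    local bin="${TRASH_DIR:-$HOME/.local/share/shell-trash}"
    mkdir -p "$bin" || return 1
    local f
    for f in "$@"; do
        # Prefix with a timestamp so two files with the same name don't collide.
        mv -- "$f" "$bin/$(date +%s%N)-$(basename -- "$f")" || return 1
    done
}
```

Unlike a real desktop trash, this records no original path, so "restore" means manually mv'ing the file back out; the freedesktop trash spec (which the desktop tools implement) stores that metadata for you.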
If you were just pulling the deleted data back off the disk, and you didn't notice the mistake immediately, or a background process was writing data in the meantime, it could still be corrupted.
There was something like that I had on Win 3.2, called undel.exe or something, but same deal: it was often corrupted somehow by the time I was recovering the data.
I usually don’t think about it at all, but every now and then I’m struck by how terrifyingly destructive rm -r can be.
I’ll use it to delete some build files or whatever, then I’ll suddenly have a streak of paranoia and need to triple check that I’m actually deleting the right thing. It would be nice to have a “safe” option that made recovery trivial, then I could just toggle “safe” to be on by default.
Honestly, after re-reading my own comment, I’m considering just putting some stupid-simple wrapper around mv that moves files to a dedicated trash bin. I’ll just delete the trash bin every now and then…
-Proceeds to collect 300 GB of build files and scrapped virtual environments over the coming month-
My “trick” with this is to mv files I’m very sure I want to be “deleting” into /tmp . If it instantly turns out to be a mistake, I can pull it back. Else, it gets purged on reboot.
This is usually A-okay for my home server since it reboots so rarely! A desktop machine might give you a little less time to reconsider. But it at least solved the “trash is using 45% of my hard disk now” issue haha.
In the very worst case scenario there’s the “Drop everything and run photorec / testdisk” as a last resort!
Then you can alias rm to echo Use trash instead! or something. You want to build new habits, not co-opt rm; it could easily happen that you're ssh'd into a system where your rm alias doesn't exist or similar.
This breaks the advice to never alias a standard command to do something radically different from its regular function.
Sure, go ahead and alias ls to have extra options like --color, but don’t alias rm to do nothing, or even rm -i (-i is interactive and prompts for each file).
Why? Because one day you’ll be logged into a different system that doesn’t have your cushioning alias and whoops, bye-bye files.
Sure, now that you think about it, the ls output on that system looked weird, but that didn't actually break anything.
As you suggest, yes, look into your OS’s trash option, but leave rm alone.
GNOME-derived systems can use gio trash fileglob (or gvfs-trash on older systems) to put things in the actual desktop trash receptacle.
KDE’s syntax sucks, but it’s kioclientX move fileglob trash:/ where X may or may not be present and is a version number of some kind.
You could set up a shell function or script that fixes that syntax and give it any name you like - as long as it doesn’t collide with a standard one. On that rare foreign system it won’t exist and everything will be fine.
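Something like this, for example (the name trash and the fallback behavior are my choices; kioclient5 is one of the version-suffixed names mentioned above):

```shell
# Dispatch to whichever trash backend this machine has; fail loudly otherwise.
trash() {
    if command -v gio >/dev/null 2>&1; then
        gio trash "$@"
    elif command -v kioclient5 >/dev/null 2>&1; then
        kioclient5 move "$@" trash:/
    else
        echo 'trash: no gio or kioclient5 found' >&2
        return 1
    fi
}
```

On the rare foreign system the function simply won't exist, so there's no habit of a cushioned rm to unlearn.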
You'd be aliasing rm to do nothing. There is no danger in aliasing rm to echo; the only thing that will happen is nothing.
Or are you seriously suggesting that if you do this, you somehow get used to rm doing nothing? That you'll just start rm'ing randomly because you know it'll echo? I mean, stupider things have happened, but… yeah.
Dumb question: how hard would it be to implement?
When most files are deleted, they aren't removed from the disk; just their index entries are.
How about rm just marks the index entry as discardable, so that if a new file needs the space it can be written there, but until then the rm can be reversed?
Sometimes distros will alias rm with the -i flag so it prompts for each file. An annoyance, but it makes you stop and think before continuing.
I think one solution is (browseable) snapshots.
There are solutions already. Just use them instead of rm: https://wiki.archlinux.org/title/Trash_management
My thought wasn't to alias rm, but rather to make a function like rmv <file> that would move the file to a trash directory. But of course this already exists. Thanks for pointing me to the resource :)
I admit that of the things rm could be aliased to do, it is one of the safer ones. It's still bad practice in my book.