Not quite...
Running Caja as root and trying to delete the files, I get the following error message in the console.
** (caja:5902): WARNING **: 16:56:17.701: Could not inhibit power management: GDBus.Error:org.freedesktop.DBus.Error.NameHasNoOwner: Name "org.gnome.SessionManager" does not exist
You can ignore that error. It occurs precisely because you're running Caja as root: it cannot connect to your session manager or user D-Bus agent. All that warning means is that Caja could not connect to power management. (It only does so to ensure your machine does not go to sleep or hibernate while doing significant work via Caja, like copying files around.)
If you want, you can run

cd /media/yourself/ONE_TB
sudo rm -rf .Trash-0
sudo rm -rf lost+found
cd

instead. Obviously, sudo rm -rf is the nuclear weapon among Linux commands, allowing you to render your system completely inoperable, but the above pattern (changing the working directory to the one containing the offending items, then running each delete command with a single file or directory name only, no paths, no slashes!) makes it much safer. Assuming the sudo rm commands are written correctly, the worst thing that can happen is that you completely delete a similarly named tree elsewhere (and only if the cd command failed or had a typo); in this case, these two commands are safe to run.
lost+found is a directory used to hold orphaned inodes and other similar oddities found when the filesystem is checked (using e2fsck/fsck). Just be careful, and re-check each command before hitting Enter.
The last "cd" alone switches back to your home directory, as you cannot unmount the device while you have a shell whose working directory is on that device (making such a mount "active"). In Linux, and more generally in the BSDs and Unix, the current working directory of each process is not tracked as a path string, but as the actual directory itself: the kernel basically keeps an open file description on the directory. This means that if you run "cd ; mkdir example ; cd example" in one shell to create a subdirectory named "example" under your home directory, and then rename that subdirectory from another shell ("cd ; mv example foobar"), the first shell still works just fine: it does not notice its actual path has changed, and it can access everything in the renamed directory. The only thing that fails is "cd "$PWD"" and similar commands that use the path string. This is also why mounts stay active (and not unmountable) as long as you have a shell open inside them. It also means that with the above sudo rm pattern, if you check that you are in the correct directory using e.g. "ls -laF" (to list the contents of the current working directory), nobody can use hidden tricks to make the immediately following sudo rm commands delete the wrong things. On some other operating systems, a nefarious user also logged in might be able to do just that, by renaming the directory you have a shell open in and replacing it with a symlink to some other directory containing the things they want you to delete, because those systems track the working directory as a path string, not as the actual inode. This kind of race window only exists when paths are used; it is not possible with file description based approaches.
And that last one is the reason why I keep telling people that if they write code to traverse directories using opendir()/readdir()/closedir() instead of nftw() or the BSD FTS family of functions, they will almost certainly introduce bugs that allow similar bait-and-switch path-based trickery to work. The underlying file description based machinery grew the entire ATFILE interface, including functions like openat(), linkat(), unlinkat() (the actual modern syscall used to delete files and directories), and even execveat(), to protect against path manipulation during operations. Simply put, these, like the current working directory, don't care if the name used to access the directory or any of its parent directories changes; they have a robust name-bypassing "hook" into the actual directory or file instead.
If you do write your own directory traversal code, then you should definitely be using openat() and fstatat(), or your code IS vulnerable to path bait-and-switch attacks and bugs, including simple file renames within the same directory. (The trick for opendir() on Linux is to use /proc/self/fd/N, where N is the nonnegative file descriptor of a read-only descriptor opened with openat() to the desired subdirectory.) A well-implemented directory traverser maintains a global set of (device, inode) tuples identifying each directory already traversed, and, while traversing each directory, a (device, inode) tuple for each file. It can still miss files renamed during the traversal (though usually it does not, because a rename-in-place tends not to reorder the directory contents), but that is acceptable; it will be able to deal with a directory being renamed during traversal, and it will not report both pre-rename and post-rename names and statistics for files that are renamed or moved mid-traversal. As you can see, that takes quite a lot of code, so it is better to rely on the nftw() provided by the standard C library (a POSIX feature) instead.
Apologies for the bit of a rant, but it is pertinent to the discussion at hand. I bet not even DiTBho has done the traversal correctly in their "lsprettysize" program, and I just hate shoddily done, known-vulnerable base utilities.