
Just curious


RandyRiegel (Programmer, joined Sep 29, 2003)
What would actually happen if I were in the root directory, signed in as root, and did a "rm -r *"? I know this is supposed to delete everything, but would it delete files needed to run the OS? If so, would the system crash instantly? I know this isn't really a tech question; I'm just curious whether this has ever happened to anybody.

Randy
 
NOTE: when I say root directory, I mean the root of the disk:

/

Randy
 
Try it, I'm sure it couldn't hurt.

Seriously though, it will delete everything. There may be some files with an "immutable" tag in their inode, but that's uncommon enough that it couldn't possibly save you. If you do it, you are officially fuqued.
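If you're curious what that immutable flag looks like in practice, here's a rough sketch on an ext2/ext3 filesystem (the [tt]/tmp/demo[/tt] path is just made up for illustration, and [tt]chattr[/tt] needs root):

[code]
# mark a file immutable (ext2/ext3 attribute, set with chattr from e2fsprogs)
touch /tmp/demo
chattr +i /tmp/demo
lsattr /tmp/demo      # the 'i' flag shows up in the attribute list

rm -f /tmp/demo       # fails with "Operation not permitted", even as root

chattr -i /tmp/demo   # the flag has to be cleared before the file can go
rm -f /tmp/demo
[/code]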
 
That is what I figured... I know that Linux doesn't care whether a file is in use when you delete it.

I only have one Linux computer here at home and didn't want to try it on that one, since it serves as my internet gateway/router. I may dig an old computer out of the closet just to try it, but if I couldn't boot it back up I couldn't see what it didn't delete, so it probably doesn't matter.

Randy
 
Hmm... There's a good chance that I'm remembering this completely wrong and this post is completely invalid.


I'm remembering that, on Linux, shared library code normally doesn't get written out to swap; instead, the OS just pages it back in from the library file on the filesystem whenever it's needed.

If that's the case, there's a good chance that, when [tt]/lib[/tt] is getting eaten up by [tt]rm[/tt], programs will start to crash when pieces of their code can no longer be paged in.

That'd be the first "bad" thing I'd expect to be observable.
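If you want to see that library code really is mapped straight from files on disk, a quick (and harmless) peek at a process's memory map shows it; the exact library paths will differ between distributions:

[code]
# list the shared-object files mapped into the current shell's address space
grep '\.so' /proc/$$/maps | awk '{print $6}' | sort -u

# the r-xp code mappings in that file are paged in from those library files on demand
[/code]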
 
That would be the first observable thing.
Next would come the inevitable problem of the kernel trying to page something back into memory that's no longer there, followed by the computer crashing hard.
Then the raised eyebrows as it fails to boot, because there's no kernel anymore.
 
Actually, you would not be able to just delete all the files. For files the system had open and was using, you would most likely get a "resource busy" error. However, you would lose enough of your system that you probably would not boot successfully again.

>---------------------------------------Lawrence Feldman
SR. QA. Engineer SNAP Appliance
lfeldman@snapappliance.com

 
No, rm just removes the directory entry pointing to the inode, so there'd be no way for a new process to open the file, but any process that already has an open file handle can still read and write it. Once the last file handle is closed, the file's space is freed.

To see this in action, start ftp'ing a large file to the machine, then delete it while the transfer is still running and watch the output of df (du won't count it any more, since the directory entry is gone). The used space keeps climbing until the download completes; when the application closes the file handle, the disk usage drops back down as the blocks are freed.
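If you don't have an FTP transfer handy, here's a rough local version of the same experiment; the file name, size, and paths are arbitrary, and lsof may or may not be installed on your box:

[code]
# create a big file and hold it open with a long-running reader
dd if=/dev/zero of=/tmp/bigfile bs=1M count=500
tail -f /tmp/bigfile & TAILPID=$!

df -h /tmp                  # used space includes the 500 MB
rm /tmp/bigfile             # directory entry gone, but tail still holds the inode open
df -h /tmp                  # used space hasn't dropped yet
lsof | grep '(deleted)'     # the orphaned file shows up marked "(deleted)"

kill $TAILPID               # close the last file handle...
df -h /tmp                  # ...and the blocks are finally freed
[/code]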

Files that are explicitly *locked* using flock et al. may produce an error, but those locks are only advisory on Linux, and the system certainly doesn't refuse to remove a file just because something has it open for reading.
 
Why would anybody even think about doing rm -r * ? The effects are fairly obvious!
 
I think it boils down to, "Does Unix provide any sort of safety net to protect you from yourself?" And the answer is, "Not if you're root". Moral: Only use root privs when you need them and always think before you do anything stupid.
 
> always think before you do anything stupid.

Thinking before you do anything non-stupid is optional, of course. ;-)
 
chipperMDW :)

Randy: With a live CD you can boot from CD and mount your 'rm -r *' filesystem to see what survived.
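Something along these lines from the live CD's shell, where /dev/hda1 is only a placeholder for whatever your root partition really is:

[code]
mkdir /mnt/victim
mount -o ro /dev/hda1 /mnt/victim   # mount the wiped root filesystem read-only
ls -la /mnt/victim                  # see which files, if any, survived
[/code]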

In most cases 'rm -rf' is used, -f to force.

I ran into this when zipping a lot of files with '*': it didn't work, because the shell expansion evaluates all the file names and hands them to the command, which can blow past the kernel's argument-length limit ("Argument list too long").
But a '.' works.

So I recommend 'rm -rf /.' :)
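For the curious, that limit is easy to poke at harmlessly; /tmp/many is just an imaginary directory stuffed with files, and the exact limit depends on the kernel:

[code]
getconf ARG_MAX              # the kernel's cap on combined argument + environment size

cd /tmp/many
echo * | wc -c               # echo is a shell builtin, so this just measures the expansion
ls *                         # ls is a real exec, so this can die with "Argument list too long"

# the usual workaround feeds the names to the command in batches instead:
find . -type f -print0 | xargs -0 ls -l > /dev/null
[/code]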

seeking a job as a Java programmer in Berlin
 
> [tt]rm -rf /.[/tt]

Another good reason to do that: it'll get rid of those pesky dotfiles living in [tt]/[/tt] that'd survive a mere [tt]rm -rf *[/tt] from [tt]/[/tt].
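In case it's not obvious why dotfiles survive: by default the shell's '*' glob doesn't match names starting with a dot. A throwaway demo (paths invented):

[code]
mkdir /tmp/dotdemo && cd /tmp/dotdemo
touch visible .hidden

echo *        # prints only: visible
rm -rf *      # removes 'visible' but leaves '.hidden' alone
ls -A         # .hidden is still there
[/code]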
 