06-06-2002
Well, it's quite a quandary...
If you deleted them, then even if they're technically recoverable, it's almost surely too much trouble to get them back. You could try searching the raw disk for text strings, but there are no guarantees (if your admin will even let you).
Now - if you've done something naughty and don't want anyone to find out, it's a different story. Once again, for the end user (non-BOfH admin types) it's not worth it or they don't know how. But if you committed a crime, it's very very hard to completely erase all traces of that. The federal government is very determined to get your data out so it can use it against you in court. Even if overwritten many times, patterns can be found between tracks on your HD using special hardware / software combos. You could try running a utility like "wipe" or "bcwipe" and running at least 50 passes with random data, then zero it out, then do the same with the slack space of your files (look for a tool called bmap for Linux; on any other Unix, I think you're SOL), then delete them. No guarantees, either...
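The multi-pass idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not wipe or bcwipe themselves, and on modern journaling filesystems, SSDs, and copy-on-write storage it does NOT guarantee the old blocks are actually gone - it only shows the random-passes-then-zero sequence:

```python
import os

def overwrite_file(path, random_passes=3):
    """Overwrite a file in place with random data, then zeros, then delete it.

    Illustrative only: real wiping tools use many more passes and
    cannot defeat wear leveling, journals, or snapshots anyway.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(random_passes):
            f.seek(0)
            f.write(os.urandom(size))   # one pass of random data
            f.flush()
            os.fsync(f.fileno())        # push this pass to the disk
        f.seek(0)
        f.write(b"\x00" * size)         # final pass: zero it out
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)                     # then delete it
```

Note this only touches the file's allocated blocks; slack space and old copies elsewhere on the disk (swap, temp files) are untouched, which is exactly why tools like bmap exist.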
Remember Mafiaboy (the Canadian kid who DDoS'ed CNN, Yahoo, and other big guys)? He threw his hard drive into a lake, where it sat for months. The feds still got data off of it.
Kevin Poulsen? He encrypted his files anywhere between 2 and 5 times, then deleted them - the feds still got some data out of it...
SRM(1) General Commands Manual SRM(1)
NAME
srm - secure remove (secure_deletion toolkit)
SYNOPSIS
srm [-d] [-f] [-l] [-l] [-r] [-v] [-z] files
DESCRIPTION
srm is designed to delete data on media in a secure manner that cannot be recovered by thieves, law enforcement or other threats. The
wipe algorithm is based on the paper "Secure Deletion of Data from Magnetic and Solid-State Memory" presented at the 6th Usenix Security
Symposium by Peter Gutmann, one of the leading civilian cryptographers.
The secure data deletion process of srm goes like this:
* 1 pass with 0xff
* 5 random passes. /dev/urandom is used for a secure RNG if available.
* 27 passes with special values defined by Peter Gutmann.
* 5 random passes. /dev/urandom is used for a secure RNG if available.
* Rename the file to a random value
* Truncate the file
As an additional measure of security, the file is opened in O_SYNC mode and after each pass an fsync() call is done. srm writes 32k blocks
for the purpose of speed, filling buffers of disk caches to force them to flush and overwriting old data which belonged to the file.
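The rename-then-truncate steps at the end of the pass sequence can be sketched as follows. This is a rough Python illustration of srm's final steps, not srm itself; the 38 overwrite passes are reduced to a single 0xff pass here, just to show why the filename and file length get obscured along with the contents:

```python
import os
import secrets

def scrub_and_unlink(path):
    """Illustrative sketch of srm's tail end: one 0xff pass in 32k
    blocks with O_SYNC/fsync, rename to a random name, truncate,
    then unlink. Not a real secure-deletion tool."""
    size = os.path.getsize(path)
    # open in O_SYNC mode where available, as srm does
    fd = os.open(path, os.O_WRONLY | getattr(os, "O_SYNC", 0))
    try:
        remaining = size
        while remaining > 0:            # write 32k blocks for speed
            n = min(32 * 1024, remaining)
            os.write(fd, b"\xff" * n)
            remaining -= n
        os.fsync(fd)                    # fsync() after the pass
    finally:
        os.close(fd)
    # rename the file to a random value, hiding the original name
    newpath = os.path.join(os.path.dirname(path) or ".",
                           secrets.token_hex(8))
    os.rename(path, newpath)
    with open(newpath, "r+b") as f:
        f.truncate(0)                   # truncate, hiding the length
    os.remove(newpath)
```

The rename and truncate matter because directory entries and inode sizes leak metadata even after the data blocks are overwritten.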
COMMANDLINE OPTIONS
-d ignore the two special dot files . and .. on the commandline. (so you can execute it like "srm -d .* *")
-f fast (and insecure mode): no /dev/urandom, no synchronize mode.
-l lessens the security. Only two passes are written: one pass with 0xff and a final pass with random values.
-l -l given a second time lessens the security even more: only one random pass is written.
-r recursive mode, deletes all subdirectories.
-v verbose mode
-z wipes the last write with zeros instead of random data
LIMITATIONS
NFS Beware of NFS. You can't ensure you really completely wiped your data from the remote disks.
Raid Raid systems use striped disks and have large caches. It's hard to wipe them.
swap, /tmp, etc.
Some of your data might have a temporary (deleted) copy somewhere on the disk. You should use sfill, which comes with the
secure_deletion package, to ensure the free disk space is wiped as well. However, if a small file has already acquired a block
that held your precious data, no tool known to me can help you here. For a secure deletion of the swap space, sswap is available.
BUGS
No bugs. There was never a bug in the secure_deletion package (in contrast to my other tools, whew, good luck ;-) Send me any that you
find. Patches are nice too :)
AUTHOR
van Hauser / THC <vh@thc.org>
DISTRIBUTION
The newest version of the secure_deletion package can be obtained from http://www.thc.org
srm and the secure_deletion package is (C) 1997-2003 by van Hauser / THC (vh@thc.org)
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
the Free Software Foundation; Version 2.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
SEE ALSO
sfill (1), sswap (1), sdmem (1)
SRM(1)