Top Forums Programming Open Source What is your favorite Linux distro? Post 303029322 by stomp on Thursday 24th of January 2019 06:20:53 PM
** Debian & Ubuntu **

- Painless Upgrades
- No surprises

** System Rescue CD **

- Excellent Rescue System (I like it more than Knoppix or GRML)
- For Sysadmins
- Especially for Rescue Tasks

** Knoppix **

- Best Live Distribution in Terms of GUI/End User Applications

** LEDE/OpenWRT **

- Gets you more out of your router
- Some devices are available with it preinstalled (GL.iNet: really cheap! The Turris Omnia only partly, because that's not pure OpenWRT)
- Can be difficult to find supported hardware (did I say I prefer hardware with that OS preinstalled? :-) )
 

DUPEMAP(1)							   Magic Rescue 							DUPEMAP(1)

NAME
       dupemap - create a database of file checksums and use it to eliminate duplicates

SYNOPSIS
       dupemap [ options ] [ -d database ] operation path...

DESCRIPTION
       dupemap recursively scans each path and computes a checksum of each file's contents. Directories are searched in no particular order. Its behaviour depends on whether the -d option is given and on the operation parameter, which must be a comma-separated list of scan, report, delete.

   Without -d
       dupemap takes action when it sees the same checksum more than once, i.e. it simply finds duplicates recursively. The action depends on operation:

       report Report files that are encountered more than once, printing their names to standard output.

       delete[,report]
              Delete files that are encountered more than once. Print their names if report is also given.

       WARNING: use the report operation first to see what would be deleted.

       WARNING: you are advised to make a backup of the target first, e.g. with "cp -al" (for GNU cp) to create hard links recursively.

   With -d
       The database argument to -d names a database file (see the DATABASE section below) to read from or write to. In this mode, the scan operation should be run on one path, followed by the report or delete operation on another (not the same!) path.

       scan   Add the checksum of each file to database. This operation must be run first to create the database. To start over, you must manually delete the database file(s) (see the DATABASE section).

       report Print each file name whose checksum is found in database.

       delete[,report]
              Delete each file whose checksum is found in database. If report is also present, print the name of each deleted file.

       WARNING: if you run dupemap delete on the same path you just ran dupemap scan on, it will delete every file! The idea of these operations is to scan one path and then delete files in a second path.

       WARNING: use the report operation first to see what would be deleted, and make a backup of the target (e.g. with "cp -al") before deleting anything.
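Without -d, dupemap is essentially a single recursive pass that remembers every checksum it has seen. A minimal sketch of that pass in Python, assuming the (CRC32, file size) checksum described in the DATABASE section below (an illustration, not dupemap's actual code):

```python
import os
import zlib

def checksum(path):
    """dupemap-style checksum: CRC32 of the contents combined with the size."""
    crc = 0
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
            size += len(chunk)
    return (crc, size)

def find_duplicates(root):
    """Like 'dupemap report' without -d: list files whose checksum repeats."""
    seen = set()
    dupes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            key = checksum(path)
            if key in seen:
                dupes.append(path)  # a later file with an already-seen checksum
            else:
                seen.add(key)
    return dupes
```

As in dupemap itself, which copy of a duplicate pair is reported depends on traversal order, so a report pass before any delete pass is the safe habit.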
OPTIONS
       -d database
              Use database as an on-disk database to read from or write to. See the DESCRIPTION section above for how this influences the operation of dupemap.

       -I file
              Read input files from file in addition to those listed on the command line. If file is "-", read from standard input. Each line is interpreted as a file name. The paths given here will NOT be scanned recursively. Directories are ignored and symlinks are followed.

       -m minsize
              Ignore files below this size.

       -M maxsize
              Ignore files above this size.
USAGE
   General usage
       The easiest operation to understand is the one where the -d option is not given. To delete all duplicate files in /tmp/recovered-files, do:

       $ dupemap delete /tmp/recovered-files

       Often, dupemap scan is run to produce a checksum database of all files in a directory tree. Then dupemap delete is run on another directory, possibly after dupemap report. For example, to delete all files in /tmp/recovered-files that already exist in $HOME:

       $ dupemap -d homedir.map scan $HOME
       $ dupemap -d homedir.map delete,report /tmp/recovered-files

   Usage with magicrescue
       The main application for dupemap is to take some of the pain out of undelete operations with magicrescue(1). magicrescue extracts every single file of the specified type on the block device, so undeleting files requires you to find a few files out of hundreds, which can take a long time if done manually. What we want is to extract only the documents that don't already exist on the file system.

       In the following scenario, you have accidentally deleted some important Word documents in Windows. If this were a real-world scenario, then by all means use The Sleuth Kit. However, magicrescue will work even when the directory entries have been overwritten, i.e. more files were stored in the same folder later.

       Boot into Linux and change to a directory with lots of space. Mount the Windows partition, preferably read-only (especially with NTFS), and create the directories we will use:

       $ mount -o ro /dev/hda1 /mnt/windows
       $ mkdir healthy_docs rescued_docs

       Extract all the healthy Word documents with magicrescue and build a database of their checksums. It may seem a little redundant to send all the documents through magicrescue first, but this process may modify them (e.g. by stripping trailing garbage), so their checksums would otherwise not match the rescued documents. It will also find documents embedded inside other files, such as uncompressed zip archives or files with the wrong extension.

       $ find /mnt/windows -type f | magicrescue -I- -r msoffice -d healthy_docs
       $ dupemap -d healthy_docs.map scan healthy_docs
       $ rm -rf healthy_docs

       Now rescue all "msoffice" documents from the block device and get rid of everything that is not a *.doc:

       $ magicrescue -Mo -r msoffice -d rescued_docs /dev/hda1 | grep -v '\.doc$' | xargs rm -f

       Remove all the rescued documents that also appear on the file system, then remove duplicates:

       $ dupemap -d healthy_docs.map delete,report rescued_docs
       $ dupemap delete,report rescued_docs

       The rescued_docs folder should now contain only a few files: the undeleted files plus some documents that were not stored in contiguous blocks (use that defragger ;-)).

   Usage with fsck
       In this scenario (based on a true story), you have a hard disk that has gone bad. You have managed to dd about 80% of the contents into the file diskimage, and you have an old backup from a few months ago. The disk is using reiserfs on Linux.

       First, use fsck to make the file system usable again. It will find many nameless files and put them in lost+found. You need to make sure there is some free space in the disk image, so fsck has something to work with:

       $ cp diskimage diskimage.bak
       $ dd if=/dev/zero bs=1M count=2048 >> diskimage
       $ reiserfsck --rebuild-tree diskimage
       $ mount -o loop diskimage /mnt
       $ ls /mnt/lost+found
       (tons of files)

       Our strategy is to restore the system with the old backup as a base and merge the two other sets of files (/mnt/lost+found and /mnt) into the backup after eliminating duplicates. Therefore we create a checksum database of the directory in which the backup has been unpacked:

       $ dupemap -d backup.map scan ~/backup

       Next, we eliminate all the files from the rescued image that are also present in the backup:

       $ dupemap -d backup.map delete,report /mnt

       We also want to remove duplicates from lost+found, and to get rid of any files that are also present in the other directories in /mnt:

       $ dupemap delete,report /mnt/lost+found
       $ ls /mnt | grep -v lost+found | xargs dupemap -d mnt.map scan
       $ dupemap -d mnt.map delete,report /mnt/lost+found

       This should leave only the files in /mnt that have changed since the last backup or got corrupted. In particular, the contents of /mnt/lost+found should now be reduced enough to sort through manually (or perhaps with magicsort(1)).

   Primitive intrusion detection
       You can use dupemap to see what files change on your system. This is one of the more exotic uses, and it is only included for inspiration.

       First, map the whole file system:

       $ dupemap -d old.map scan /

       Then come back a few days or weeks later and run dupemap report. This gives you a view of what has NOT changed. To see what has changed, you need a list of the whole file system; you can get that list while preparing a new map. Both lists need to be sorted to be compared:

       $ dupemap -d old.map report / | sort > unchanged_files
       $ dupemap -d current.map scan / | sort > current_files

       All that is left is to compare these files and prepare for next week. This assumes that the dbm library appends the ".db" extension to database files:

       $ diff unchanged_files current_files > changed_files
       $ mv current.map.db old.map.db
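All of the recipes above follow the same two-step shape: build a checksum database from one tree, then delete matching files from another. A hypothetical Python equivalent of `dupemap -d db scan A` followed by `dupemap -d db delete,report B` (an illustrative sketch, not part of Magic Rescue):

```python
import os
import zlib

def checksum(path):
    """dupemap-style checksum: CRC32 of the contents combined with the size."""
    crc, size = 0, 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
            size += len(chunk)
    return (crc, size)

def scan(root):
    """Like 'dupemap -d db scan root': record the checksum of every file."""
    db = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            db.add(checksum(os.path.join(dirpath, name)))
    return db

def delete_matches(root, db):
    """Like 'dupemap -d db delete,report root': delete files whose checksum
    is already in db, returning the names of the deleted files."""
    deleted = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if checksum(path) in db:
                os.remove(path)
                deleted.append(path)
    return deleted
```

As with the real tool, running the delete step on the same tree you just scanned would delete every file in it, so the sketch inherits the same warning.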
DATABASE
       The actual database file(s) written by dupemap will have some relation to the database argument, but most implementations append an extension. For example, Berkeley DB names the file database.db, while Solaris and GDBM create both a database.dir and a database.pag file.

       dupemap depends on a database library for storing the checksums. It currently requires the POSIX-standardized ndbm library, which must be present on XSI-compliant UNIXes. Implementations are not required to handle hash key collisions, and a failure to do so could make dupemap delete too many files. I haven't heard of such an implementation, though.

       The current checksum algorithm is the file's CRC32 combined with its size. Both values are stored in native byte order, and because of varying type sizes the database is not portable across architectures, compilers and operating systems.
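The byte-order caveat can be made concrete in Python. The record layout below (`struct` format "=Iq") is an assumption chosen for illustration, not dupemap's actual on-disk format:

```python
import struct
import sys
import zlib

data = b"hello"
crc = zlib.crc32(data)  # 32-bit CRC of the contents...
size = len(data)        # ...combined with the file size

# Pack the pair as fixed-width fields. "=" uses the machine's native
# byte order, which is what makes such a database non-portable.
key = struct.pack("=Iq", crc, size)
assert len(key) == 12  # 4-byte CRC + 8-byte size

# The record round-trips on the machine that wrote it...
assert struct.unpack("=Iq", key) == (crc, size)

# ...but reading the same 12 bytes in the opposite byte order yields a
# different (crc, size) pair, which is how a database written on one
# architecture gets misread on another.
other = ">Iq" if sys.byteorder == "little" else "<Iq"
assert struct.unpack(other, key) != (crc, size)
```

C implementations additionally differ in type sizes and struct padding, which is why the man page warns about compilers as well as architectures.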
SEE ALSO
       magicrescue(1), weeder(1)

       This tool does the same thing weeder does, except that weeder does not seem to handle many files without crashing, and it has no largefile support.
BUGS
       There is a tiny chance that two different files can have the same checksum and size. The probability of this happening is around 1 in 10^14, and since dupemap is part of the Magic Rescue package, which deals with disaster recovery, that chance is an insignificant part of the game. You should consider this if you apply dupemap to other applications, especially security-related ones (see the next paragraph).

       It is possible to craft a file to have a known CRC32. You need to keep this in mind if you use dupemap on untrusted data. A solution could be to implement an option for using MD5 checksums instead.
AUTHOR
       Jonas Jensen <jbj@knef.dk>
LATEST VERSION
       This tool is part of Magic Rescue. You can find the latest version at <http://jbj.rapanden.dk/magicrescue/>

1.1.8                                   2008-06-26                                   DUPEMAP(1)