File comparison of huge files
Post 302588084 by Corona688, 01-06-2012 04:15 PM
Quote:
Originally Posted by kaaliakahn
Thanks ctsgnb.

Could you please explain why you need to sort?
It's the simplest and most reliable way to tell whether a line is a duplicate or not. sort is more sophisticated than it looks: it does an external merge sort, spilling intermediate runs to temporary files on disk, so it can handle files far larger than available memory without bogging down. Once the file is sorted, all duplicate lines end up adjacent, so uniq can strip them in a single pass.
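For example, a minimal sketch of that pipeline (the filenames are placeholders; the -T option is supported by GNU sort and most other implementations, and points sort's temporary spill files at a disk with enough free space):

Code:
# Sort, then collapse adjacent duplicate lines in one pass.
sort bigfile | uniq > deduped

# Same result in one step; -T /scratch tells sort where to keep
# its temporary files, useful when /tmp is too small for a huge input.
sort -u -T /scratch bigfile > deduped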
Quote:
Thirdly, please explain: why use mknod?
He's actually making a named pipe (mknod fifo p). On many systems you can just use mkfifo for that instead.

A named pipe lets you do this:

Code:
mkfifo fifo
cat file > fifo &   # the first cat writes into the fifo in the background...
cat fifo            # ...while the second cat reads from it.
# This is effectively: cat file | cat
# Named pipes are a handy way to connect programs that read from
# filenames and can't easily handle stdin/stdout communication.
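To tie that back to the original problem, here is a hypothetical sketch (file1 and file2 are placeholder names, not from the thread) of using named pipes to compare two huge files: comm insists on sorted input supplied as filenames, and the fifos let both sorts stream into it without writing the sorted copies to disk first:

Code:
mkfifo sorted1 sorted2
sort file1 > sorted1 &               # sort each input in the background,
sort file2 > sorted2 &               # streaming the results into the fifos
comm -3 sorted1 sorted2 > diff.txt   # keep only lines unique to either file
rm sorted1 sorted2                   # fifos are ordinary directory entries; clean up

comm -3 suppresses the column of lines common to both inputs, so diff.txt ends up holding only the lines that appear in one file but not the other.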
