Performance problem with removing duplicates in a huge file (50+ GB)
Post 302752669 by Corona688, in UNIX for Advanced & Expert Users, Monday 7 January 2013, 11:16 AM
As it stands, the data is going to be very difficult to manage.

My suggestion would be to transform the data into something more suitable for sort -u, then transform it back afterwards.
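A minimal sketch of that idea, assuming the records are multi-line blocks separated by blank lines (the record layout and file names here are assumptions; adapt the flatten/restore steps to the real format):

    # 1. Flatten: turn each multi-line record into one line, substituting the
    #    control character \001 for the embedded newlines.
    awk 'BEGIN { RS=""; FS="\n" } { gsub(/\n/, "\001"); print }' huge.dat |
    # 2. Deduplicate: let sort do the heavy lifting. -T points the temp files
    #    at a filesystem with room to spare; -T and -S are GNU sort options.
    sort -u -T /big/tmp -S 2G |
    # 3. Restore: put the newlines back and re-insert the blank separators.
    awk '{ gsub("\001", "\n"); print; print "" }' > deduped.dat

Both transform steps stream in constant memory, so the only real cost on a 50+ GB file is the external merge sort itself.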
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

removing duplicates from a file

I have a file with about 1000 entries; it will contain entries like 1000,ram 2000,pankaj 1001,rahim 1000,ram 2532,govind 2000,pankaj 3000,venkat 2532,govind. What I want is to extract only the distinct rows from this file, so my output should contain only 1000,ram... (2 Replies)
Discussion started by: trichyselva
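A quick sketch for this one: the classic awk idiom prints a row only the first time it appears and preserves the original order (sort -u also works when order does not matter):

    # seen[$0]++ is 0 (false) on first sight, so each line prints exactly once.
    awk '!seen[$0]++' infile > outfile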

2. UNIX for Dummies Questions & Answers

removing duplicates of a pattern from a file

Hey all, I need some help. I have a text file with names in it. My goal is that if a particular pattern exists in that file more than once, then I want to rename all the occurrences of that pattern with alternate patterns. For example, if I have PATTERN occurring 5 times, then I want to... (3 Replies)
Discussion started by: ashisharora
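One way to attack it (a sketch; PATTERN and the PATTERN_1, PATTERN_2, ... naming are assumptions about what "alternate patterns" means). Using match() and advancing past each hit avoids rescanning text that has already been rewritten:

    # Replace the 1st, 2nd, ... occurrence of PATTERN with PATTERN_1, PATTERN_2, ...
    awk '{
        out = ""
        while (match($0, /PATTERN/)) {
            out = out substr($0, 1, RSTART - 1) "PATTERN_" (++n)
            $0 = substr($0, RSTART + RLENGTH)
        }
        print out $0
    }' names.txt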

3. Shell Programming and Scripting

Removing duplicates from log file?

I have a log file with posts looking like this: -- Messages can be delivered by different systems at different times. The id number is used to sort out duplicate messages. What I need is to strip the arrival time from each post, sort posts by id number, and reattach arrival time to respective... (2 Replies)
Discussion started by: Ilja
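A sketch under a big assumption about the layout, namely that each post is one line with the arrival time in field 1 and the id number in field 2: sorting on the id and then keeping the first copy of each id does the strip/sort/reattach in one pipeline, since the arrival time simply rides along on the line:

    # Order posts by id, then keep only the first line seen for each id.
    sort -k2,2n messages.log | awk '!seen[$2]++' > unique.log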

4. Shell Programming and Scripting

Removing Duplicates from file

Hi Experts, please check the following new requirement. I got data like the following in a file: FILE_HEADER 01cbbfde7898410| 3477945| home| 1 01cbc275d2c122| 3478234| WORK| 1 01cbbe4362743da| 3496386| Rich Spare| 1 01cbc275d2c122| 3478234| WORK| 1. This is a pipe-separated file with... (3 Replies)
Discussion started by: tinufarid
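Assuming FILE_HEADER is literally the first line (as the sample suggests), this passes the header through and deduplicates the data rows behind it (file names are placeholders):

    # Print line 1 unconditionally; print later lines only on first sight.
    awk 'NR == 1 || !seen[$0]++' input.txt > output.txt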

5. Shell Programming and Scripting

formatting a file and removing duplicates

Hi, I have a file whose format I want to change. It is a large file with entries in rows, but I want it to be comma separated (comma, then a space). The current file looks like this: HI, Joe, Bob, Jack, Jack. Afterwards I would want to remove any duplicates, so it would look like this: HI, Joe,... (2 Replies)
Discussion started by: kylle345
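A sketch for the reformat-plus-dedupe, assuming one name per row in the input (the file name is made up):

    # Skip repeated names, then join the survivors with a comma and a space.
    awk '!seen[$0]++ { printf "%s%s", (c++ ? ", " : ""), $0 } END { print "" }' names.txt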

6. HP-UX

Performance issue with 'grep' command for huge file size

I have 2 files; one file (say, details.txt) contains the details of employees and another file (say, emp.txt) has some selected employee names. I am extracting employee details from details.txt by using emp.txt and the corresponding code is: while read line do emp_name=`echo $line` grep -e... (7 Replies)
Discussion started by: arb_1984
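The usual cure for this pattern is to drop the shell loop and let a single process do all the matching; grep -F -f reads the names from emp.txt once and scans details.txt once:

    # Match the fixed strings listed in emp.txt against details.txt in one pass.
    grep -F -f emp.txt details.txt > extracted.txt

One scan of the big file replaces one grep process per employee name, which is where the original loop spends its time.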

7. UNIX for Dummies Questions & Answers

Removing duplicates from a file

Hi All, I am merging files coming from 2 different systems, and while doing that I am getting duplicate entries in the merged file: I,01,000131,764,2,4.00 I,01,000131,765,2,4.00 I,01,000131,772,2,4.00 I,01,000131,773,2,4.00 I,01,000168,762,2,2.00 I,01,000168,763,2,2.00... (5 Replies)
Discussion started by: Sri3001
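Since the records compare as whole lines, sort -u can merge both inputs and drop the exact duplicates in one step (file names are placeholders):

    # Merge the two system extracts and discard exact duplicate records.
    sort -u system1.dat system2.dat > merged.dat

If the two inputs are each already sorted, GNU sort -m -u merges them without a full re-sort.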

8. Shell Programming and Scripting

Removing duplicates from new file

I have two files. I want to remove/delete all the duplicate lines in file2, viz. unix, unix2, unix3. (2 Replies)
Discussion started by: sagar_1986
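Reading "duplicate lines in file2" as lines that also appear in file1 (an assumption, since the samples are elided), grep can subtract one file from the other:

    # -F fixed strings, -x whole-line match, -v invert, -f take patterns from file1.
    grep -Fxv -f file1 file2 > file2.clean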

9. Shell Programming and Scripting

Removing duplicates from new file

I have two files. I want to remove/delete all the duplicate lines in file2, viz. unix, unix2, unix3. I have tried the previous post as well, but there the complete line must match. In this case I have to check the first column only, regardless of the content of the succeeding columns. (3 Replies)
Discussion started by: sagar_1986
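For the first-column-only comparison, awk can load file1's keys first and then filter file2 (assuming comma-separated columns, as in the samples):

    # Remember every first column of file1; print only file2 lines whose
    # first column was never seen there.
    awk -F, 'NR == FNR { keys[$1]; next } !($1 in keys)' file1 file2 > file2.clean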

10. Shell Programming and Scripting

Removing White spaces from a huge file

I am trying to remove whitespace from a file containing sample data like: 457 <EOFD> Mar 1 2007 12:00:00:000AM <EOFD> Mar 31 2007 12:00:00:000AM <EOFD> system <EORD> 458 <EOFD> Mar 1 2007 12:00:00:000AM<EOFD>agf <EOFD> Apr 20 2007 9:10:56:036PM <EOFD> prodiws<EORD>. Basically these... (11 Replies)
Discussion started by: amvip
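If the whitespace to remove is the blanks hugging the <EOFD> and <EORD> markers (an assumption based on the sample), sed can trim both sides in a single streaming pass, which matters on a huge file:

    # Strip spaces/tabs immediately before and after each field/record marker.
    sed 's/[[:blank:]]*<EOFD>[[:blank:]]*/<EOFD>/g; s/[[:blank:]]*<EORD>[[:blank:]]*/<EORD>/g' big.dat > trimmed.dat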