Full Discussion: Duplicate removal
Post 302807079 by vidyadhar85 on Tuesday 14th of May 2013 06:27:12 AM
Try:

Code:
awk '!A[$2$3$5$7$8]++'  filename
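
This one-liner keeps the first line seen for every combination of fields 2, 3, 5, 7 and 8: the first time a key turns up, A[key] is zero, so !A[key]++ is true and the line is printed; every later line with the same key evaluates to false and is dropped. One caveat (my note, not part of the original post): plain concatenation of the fields can let different records collide across field boundaries (for example "ab" + "c" and "a" + "bc" build the same key). Putting commas in the subscript joins the fields with awk's SUBSEP instead, which avoids that. A minimal variant, reusing the original post's placeholder filename:

Code:
awk '!A[$2,$3,$5,$7,$8]++' filename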

 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Removal

I am using Unix as my OS on my server and would like to format my hard drive. How do I go about wiping my hard drive or is there a removal tool that I can use? (1 Reply)
Discussion started by: anaconda
1 Replies
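
Not from the linked thread, just a minimal sketch of one common approach, assuming GNU coreutils is available and that /dev/sdX is a placeholder for the target disk (this is destructive; triple-check the device name first):

Code:
# one random pass followed by a final pass of zeros over the whole disk
shred -v -n 1 -z /dev/sdX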

2. UNIX for Dummies Questions & Answers

Removal of Data

Hi All, (And first up, a Happy New Year to you all! 363 days to go!) I need to make sure that our old Sun Sparcstations and Servers are clear of any Confidential data before I return them... Suggestions? I'm formatting the discs to make sure that there's nothing obvious, but would... (5 Replies)
Discussion started by: geralex
5 Replies
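
A side note, not from the linked thread: on Solaris the format(1M) utility has an analyze menu with a purge option that overwrites the disk with patterns, and a plain dd pass over the raw device is a rougher alternative. A sketch, assuming c0t0d0 is the disk to wipe and that slice 2 spans the whole disk (the usual convention, but verify with prtvtoc first):

Code:
# single pass of zeros over the raw whole-disk slice
dd if=/dev/zero of=/dev/rdsk/c0t0d0s2 bs=1024k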

3. UNIX for Dummies Questions & Answers

Sort, duplicate removal - Query

Hi All, I have a problem with the sort and duplicate filter command I am using in one of my scripts. I have a '|' delimited file and want to sort and remove duplicates on fields 1, 2 and 15. These fields constitute the primary key of the table I will be loading the data into. But I see that some... (4 Replies)
Discussion started by: novice1324
4 Replies
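
A minimal sketch of the usual approach (not taken from the linked thread), with data.txt as a placeholder for the '|' delimited file:

Code:
# keep the first row seen for each combination of fields 1, 2 and 15
awk -F'|' '!seen[$1,$2,$15]++' data.txt

# or sort on those key fields and let -u keep one row per key
sort -t'|' -k1,1 -k2,2 -k15,15 -u data.txt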

4. Shell Programming and Scripting

Removal of Duplicate Entries from the file

I have a file which consists of 1000 entries. Out of 1000 entries I have 500 duplicate entries. I want to remove the first duplicate entry (i.e. the entire line) in the file. An example of the file is shown below: 8244100010143276|MARISOL CARO||MORALES|HSD768|CARR 430 KM 1.7 ... (1 Reply)
Discussion started by: ravi_rn
1 Replies
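
A sketch covering both readings of the request (not from the linked thread), with records.txt as a placeholder name; the first keeps the first copy of each repeated line, the second keeps the last copy instead:

Code:
# keep the first occurrence of each identical line
awk '!seen[$0]++' records.txt

# keep the last occurrence instead (tac is GNU coreutils)
tac records.txt | awk '!seen[$0]++' | tac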

5. Homework & Coursework Questions

removal of files?

Use and complete the template provided. The entire template must be completed. If you don't, your post may be deleted! 1. The problem statement, all variables and given/known data: remove all files, and only files, whose first three characters are numerals. 2. Relevant commands,... (6 Replies)
Discussion started by: linuxtraining
6 Replies
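
Not the homework answer from the thread, just a sketch of one way to do it, assuming the files live in the current directory (-maxdepth is a GNU/BSD find extension):

Code:
# delete regular files in the current directory whose names start with three digits
find . -maxdepth 1 -type f -name '[0-9][0-9][0-9]*' -exec rm -- {} +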

6. Shell Programming and Scripting

\n removal between two |

I have a multi-line string I'm trying to do some clean-up on. Example: 1|575|67866|"fgnhdgj"|"afhgajh agfajgf ahfjhgfk ahfkhf"|568 2|56|5435|"mayank"|"gupta gdja agdjagf"|677 3|5666|5435|"mayank"|"gupta gdja agdjagf"|677 I need a shell script that replaces every \n that falls inside the quoted " " fields. (11 Replies)
Discussion started by: mayankgupta18
11 Replies
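
A sketch of one approach (not from the linked thread), assuming the quoted fields never contain escaped double quotes: a logical record is complete once it holds an even number of " characters, so incomplete lines are joined onto the next one, replacing the embedded newline with a space. input.txt is a placeholder name:

Code:
awk '{
    nq += gsub(/"/, "&")                 # running count of double quotes in the current record
    buf = (buf == "" ? $0 : buf " " $0)  # glue continuation lines together with a space
    if (nq % 2 == 0) { print buf; buf = ""; nq = 0 }
}' input.txt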

7. Shell Programming and Scripting

Find duplicate based on 'n' fields and mark the duplicate as 'D'

Hi, In a file, I have to mark duplicate records as 'D' and the latest record alone as 'C'. In the below file, I have to identify whether duplicate records are there or not based on Man_ID, Man_DT and Ship_ID, and I have to mark the record with the latest Ship_DT as "C" and the others as "D" (I have to create... (7 Replies)
Discussion started by: machomaddy
7 Replies
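
Not the solution from the linked thread, just a two-pass sketch of the general idea; it assumes a comma-separated file (file.csv as a placeholder) where Man_ID, Man_DT and Ship_ID are columns 1-3 and Ship_DT is column 4 in a sortable YYYYMMDD form; all of that layout is a guess:

Code:
# pass 1 records the latest Ship_DT per key, pass 2 appends C or D
awk -F',' 'NR == FNR { k = $1 SUBSEP $2 SUBSEP $3
                       if ($4 > max[k]) max[k] = $4
                       next }
           { k = $1 SUBSEP $2 SUBSEP $3
             print $0 FS (($4 == max[k]) ? "C" : "D") }' file.csv file.csv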

8. Shell Programming and Scripting

Duplicate line removal matching some columns only

I'm looking to remove duplicate rows from a CSV file with a twist. The first row is a header. There are 31 columns. I want to remove duplicates when the first 29 columns are identical, ignoring columns 30 and 31, BUT the duplicate that is kept should have the shortest total character length in columns 30... (6 Replies)
Discussion started by: Michael Stora
6 Replies
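
A sketch of the general shape only, not the linked thread's answer; it assumes a simple CSV with no embedded commas or quotes (input.csv is a placeholder), and the order of the non-header rows is not preserved:

Code:
awk -F',' '
    NR == 1 { print; next }                       # always keep the header row
    {
        key = $1
        for (i = 2; i <= 29; i++) key = key SUBSEP $i
        len = length($30) + length($31)
        if (!(key in keep) || len < best[key]) { keep[key] = $0; best[key] = len }
    }
    END { for (k in keep) print keep[k] }
' input.csv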

9. Web Development

Vbseo removal

First off I want to thank you for such a great site; you helped me narrow down a long search on what was wrong with my forum. I have a large forum, motorbicycling DOT com, which I had someone remove vbseo from before they went under because of all the security problems. Anyway, I have lots of 404 errors... (13 Replies)
Discussion started by: atcspaul
13 Replies

10. Shell Programming and Scripting

Honey, I broke awk! (duplicate line removal in 30M line 3.7GB csv file)

I have a script that builds a database: a ~30 million line, ~3.7 GB .csv file. After multiple optimizations it takes about 62 min to bring in and parse all the files, and it used to take 10 min to remove duplicates until I was requested to add another column. I am using the highly optimized awk code: awk... (34 Replies)
Discussion started by: Michael Stora
34 Replies
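
A hedged aside, not from the linked thread: awk holds every key in memory, so on files of this size an external sort is sometimes the less fragile route, at the cost of losing the original line order. A sketch for whole-line duplicates, with big.csv as a placeholder name:

Code:
# byte-wise collation (LC_ALL=C) speeds sort up; -T points its temp files at a roomy filesystem
LC_ALL=C sort -T /var/tmp -u big.csv > big.dedup.csv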