Remove duplicates
Posted by dtdt on 04-19-2014, 10:43 PM

I have a file with the following format:

fields separated by "|"

title1|something class|long...content1|keys
title2|somhing class|log...content1|kes
title1|sothing class|lon...content1|kes
title3|shing cls|log...content1|ks

I want to remove all duplicates that have the same "title" field (the 1st column), regardless of anything that follows on that line.

thanks a lot.
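
A minimal sketch of one common approach, assuming the data is in a file called infile (a hypothetical name) and that the first line seen for each title is the one to keep:

# "|" is the field separator; print a line only the first time its
# 1st field (the title) is seen, so later duplicates are dropped
awk -F'|' '!seen[$1]++' infile > outfile

Because only the 1st field is tested, lines with the same title count as duplicates no matter what the remaining columns contain, and the original order of the surviving lines is preserved.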
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Remove duplicates

Hello Experts, I have two files named old and new. Below are my example files. I need to compare and print the records that only exist in my new file. I tried the below awk script; it works perfectly well if the records have an exact match, but the issue I have is my old file has got extra... (4 Replies)
Discussion started by: forumthreads
4 Replies
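
For that one, a minimal sketch of the usual two-pass awk idea (it does not address the truncated "extra..." part of the question), using the file names old and new from the excerpt:

# pass 1 (NR==FNR) remembers every record in "old"; pass 2 prints
# the records from "new" that never appeared in "old"
awk 'NR==FNR { seen[$0]; next } !($0 in seen)' old new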

2. Shell Programming and Scripting

Script to remove duplicates

Hi, I need a script that removes the duplicate records and writes them to a new file. For example, I have a file named test.txt and it looks like abcd.23 abcd.24 abcd.25 qwer.25 qwer.26 qwer.98 I want to pick only $1, compare it with the next record, and the output should be abcd.23... (6 Replies)
Discussion started by: antointoronto
6 Replies
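
A hedged sketch for that request, assuming the key is everything before the first "." and that the first record for each key should be kept (new_file.txt is a hypothetical output name):

# split on "."; keep only the first record seen for each prefix (abcd, qwer, ...)
awk -F'.' '!seen[$1]++' test.txt > new_file.txt

For the sample shown, this would leave abcd.23 and qwer.25.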

3. Shell Programming and Scripting

bash - remove duplicates

I need to use a bash script to remove duplicate files from a download list, but I cannot use uniq because the urls are different. I need to go from this: http://***/fae78fe/file1.wmv http://***/39du7si/file1.wmv http://***/d8el2hd/file2.wmv http://***/h893js3/file2.wmv to this: ... (2 Replies)
Discussion started by: locoroco
2 Replies
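
One way to approach that one, sketched under the assumption that the file name after the last "/" is the deduplication key and that the list lives in urls.txt (a hypothetical name):

# split each URL on "/" and keep the first line seen for each final component ($NF)
awk -F'/' '!seen[$NF]++' urls.txt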

4. UNIX for Dummies Questions & Answers

Remove duplicates from a file

Can you tell me how to remove duplicate records from a file? (11 Replies)
Discussion started by: saga20
11 Replies
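
Two standard one-liners for that question; which one fits depends on whether the original line order matters:

sort -u file            # removes duplicate lines, but reorders the file
awk '!seen[$0]++' file  # removes duplicate lines while preserving the original order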

5. Shell Programming and Scripting

Awk: Remove Duplicates

I have the following code for removing duplicate records based on fields in the input file: it moves the duplicate records to a duplicates file (1st awk), and in the 2nd awk I fetch the non-duplicate entries from the input file into a tmp file and use mv to update the original file. Requirement: Can both the awk... (4 Replies)
Discussion started by: siramitsharma
4 Replies
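
A sketch of how the two passes could be merged into one. The excerpt does not show which fields form the key, so the first field is assumed here, and the file names are hypothetical:

# first occurrence of each key goes to a temporary copy, later occurrences
# go to the duplicates file; the temporary copy then replaces the original
awk '!seen[$1]++ { print > "inputfile.tmp"; next }
     { print > "duplicates" }' inputfile &&
mv inputfile.tmp inputfile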

6. Shell Programming and Scripting

awk remove first duplicates

Hi All, I have searched many threads for a close solution but was unable to find a similar scenario. I would like to print all duplicates based on the 3rd column except the first occurrence, and also print a line if it is a single entry (non-duplicate). i/P file 12 NIL ABD LON 11 NIL ABC... (6 Replies)
Discussion started by: sybadm
6 Replies
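
A two-pass awk sketch of that requirement (the file is read twice): count how often each 3rd-column value occurs, then print a line if its key occurs only once, or if it is the second or later occurrence of a repeated key:

# pass 1 counts each $3; pass 2 prints the singles plus every occurrence
# of a repeated $3 except the first
awk 'NR==FNR { cnt[$3]++; next } cnt[$3] == 1 || ++seen[$3] > 1' file file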

7. Shell Programming and Scripting

Help with merge and remove duplicates

Hi all, I need some help to remove duplicates from a file before merging. I have got 2 files: file1 has data in the format 4300 23456 4301 2357 The 4-byte values on the right-hand side are unique and are not repeated anywhere in the file. file2 has data in the same format but is not in... (10 Replies)
Discussion started by: roy121
10 Replies
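
A rough sketch of one way to drop file2 records whose right-hand value already exists in file1 before merging. The truncated excerpt does not confirm the layout, so this assumes the unique value is the 2nd whitespace-separated field, and file2.uniq and merged are hypothetical names:

# remember every $2 from file1, keep only the file2 lines whose $2 is new,
# then concatenate
awk 'NR==FNR { seen[$2]; next } !($2 in seen)' file1 file2 > file2.uniq
cat file1 file2.uniq > merged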

8. Shell Programming and Scripting

Sort and Remove duplicates

Here is my task: I need to sort two input files and remove duplicates in the output files: Sort by 13 characters from 97, ascending; Sort by 1 character from 96, ascending. If duplicates are found, retain the first value in the file. The input files are variable length, convert... (4 Replies)
Discussion started by: ysvsr1
4 Replies
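
One possible reading of that task, sketched under the assumptions that the duplicate key is exactly the two sort fields (characters 97-109 plus character 96) and that the records contain no blanks before column 110 (otherwise a separator that never occurs in the data would have to be given to sort with -t). De-duplicate first, keeping the first record in file order, then sort:

# drop later records that repeat the 14 key characters, then sort by
# characters 97-109 and, within ties, by character 96
awk '!seen[substr($0, 97, 13) substr($0, 96, 1)]++' infile |
sort -k1.97,1.109 -k1.96,1.96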

9. Shell Programming and Scripting

Remove duplicates

Hi I have a below file structure. 200,1245,E1,1,E1,,7611068,KWH,30, ,,,,,,,, 200,1245,E1,1,E1,,7611070,KWH,30, ,,,,,,,, 300,20140223,0.001,0.001,0.001,0.001,0.001 300,20140224,0.001,0.001,0.001,0.001,0.001 300,20140225,0.001,0.001,0.001,0.001,0.001 300,20140226,0.001,0.001,0.001,0.001,0.001... (1 Reply)
Discussion started by: tejashavele
1 Replies

10. Shell Programming and Scripting

How to remove duplicates using for loop?

values=(1 2 3 5 4 2 3 1 6 8 3 5) # I need the output like this by removing the duplicates: 1 2 3 5 4 6 8 # I don't need sorting in my program # please explain it simply using a for loop # OS: Ubuntu, shell: bash (5 Replies)
Discussion started by: Meeran Rizvi
5 Replies
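
A small bash sketch of that last one, keeping the first occurrence of each value and preserving order using nothing but for loops:

values=(1 2 3 5 4 2 3 1 6 8 3 5)
result=()
for v in "${values[@]}"; do          # walk the input values in order
    dup=0
    for r in "${result[@]}"; do      # has this value been kept already?
        if [ "$v" = "$r" ]; then dup=1; break; fi
    done
    if [ "$dup" -eq 0 ]; then result+=("$v"); fi
done
echo "${result[@]}"                  # prints: 1 2 3 5 4 6 8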