07-18-2011
Quote:
Originally Posted by
rdcwayx
The requester needs the duplicate records.
Overlooked
Code:
awk '_[$1]++==1' filename
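For reference, `_[$1]++` returns the previous count for the first field, so the test `==1` is true exactly when that key has been seen once before. The one-liner therefore prints the second occurrence of every duplicated key (a sketch, assuming duplicates are keyed on field 1):

```shell
# Print the second occurrence of each first-field value:
# _[$1]++ yields the count seen so far, then increments it,
# so the pattern matches only on a key's second appearance.
printf 'a 1\na 2\nb 1\na 3\n' | awk '_[$1]++==1'
# -> a 2
```

Note that only the second copy is printed; the third and later copies of the same key are suppressed again.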
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi all,
I have a text file fileA.txt
DXRV|02/28/2006 11:36:49.049|SAC||||CDxAcct=2420991350
DXRV|02/28/2006 11:37:06.404|SAC||||CDxAcct=6070970034
DXRV|02/28/2006 11:37:25.740|SAC||||CDxAcct=2420991350
DXRV|02/28/2006 11:38:32.633|SAC||||CDxAcct=6070970034
DXRV|02/28/2006... (2 Replies)
Discussion started by: sabercats
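The question above is truncated, but assuming the goal is to list the CDxAcct values that occur more than once, a sketch (the CDxAcct=value pair sits in field 7 of the pipe-delimited line):

```shell
# Count each CDxAcct value (the part after '=' in field 7) and
# print those that appear more than once.
awk -F'|' '{split($7, a, "="); c[a[2]]++}
           END {for (k in c) if (c[k] > 1) print k}' fileA.txt
```

The `for (k in c)` order is unspecified, so pipe through `sort` if a stable listing matters.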
2. HP-UX
HI:
I know this topic already exists in this forum, but not exactly with my problem.
I want to duplicate a disk; my source disk is about 2 GB, while the new disk is about 36 GB.
The problems:
When I use the dd command it fails, I think because of the disk sizes, and the sizes of the... (13 Replies)
Discussion started by: pmoren
3. UNIX for Dummies Questions & Answers
I have the files logged in the file system with names in the format of: filename_ordernumber_date_time
eg:
file_1_12012007_1101.txt
file_2_12022007_1101.txt
file_1_12032007_1101.txt
I need to find out all the files that are logged multiple times with the same order number. In the above eg, I... (1 Reply)
Discussion started by: sudheshnaiyer
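Assuming names of the form filename_ordernumber_date_time, the order number is the second underscore-separated part, so duplicated order numbers can be listed with a sketch like this (here fed from `ls`, one name per line):

```shell
# Print order numbers (2nd '_'-separated field) that occur in
# more than one file name.
ls | awk -F'_' 'NF >= 4 {c[$2]++}
                END {for (k in c) if (c[k] > 1) print k}'
```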
4. Shell Programming and Scripting
Hi,
I have a file of users, filename -> "readfile", with the following entries:
peter
john
alaska
abcd
xyz
and i have directory /var/
I want to first cat "readfile" line by line, reading peter into a variable, and also cross-check with /var/ how many directories are available... (8 Replies)
Discussion started by: learnbash
5. Shell Programming and Scripting
I am looking for a way to delete duplicate entries in a VERY large file (approx 2gb)
However, I need to compare several fields before determining if a record is a duplicate. I set up a hash in Perl but it seems not to function correctly.
Any help appreciated.
of the 19 comma separated fields I... (2 Replies)
Discussion started by: Goyde
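As an alternative to a Perl hash, awk can deduplicate on several fields at once; a sketch assuming the first three comma-separated fields form the key:

```shell
# Keep only the first line seen for each (field1, field2, field3)
# combination in a comma-separated file. The !seen[...]++ pattern
# is true only on the key's first appearance.
awk -F',' '!seen[$1 FS $2 FS $3]++' bigfile.csv
```

Memory use is one hash entry per distinct key, which for a ~2 GB file is usually far smaller than the file itself.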
6. Shell Programming and Scripting
I've been working on a script (/bin/sh) for which I have requested and received help here (and I am very grateful for it!). The client has modified their requirements (a tad), so, without messing up the script too much, I come once again for assistance.
Here are the file.dat contents:
ABC1... (4 Replies)
Discussion started by: petersf
7. Shell Programming and Scripting
Hi,
In a file, I have to mark duplicate records as 'D' and the latest record alone as 'C'.
In the below file, I have to identify whether duplicate records exist based on Man_ID, Man_DT, and Ship_ID, and I have to mark the record with the latest Ship_DT as "C" and the others as "D" (I have to create... (7 Replies)
Discussion started by: machomaddy
8. Shell Programming and Scripting
Hi All,
i have file like
ID|Indiv_ID
12345|10001
|10001
|10001
23456|10002
|10002
|10002
|10002
|10003
|10004
If Indiv_ID has duplicate values and the corresponding ID column is null, then copy the ID.
I need output like:
ID|Indiv_ID
12345|10001... (11 Replies)
Discussion started by: bmk
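A sketch for the fill-in above, assuming the non-empty ID for each Indiv_ID appears before its blank rows: remember the last non-empty ID per Indiv_ID and reuse it whenever the ID column is empty.

```shell
# Fill empty ID fields from an earlier row with the same Indiv_ID;
# rows whose Indiv_ID never had an ID (e.g. 10003, 10004) stay empty.
awk -F'|' 'NR == 1 {print; next}
           $1 != "" {id[$2] = $1}
           {print ($1 == "" && $2 in id ? id[$2] : $1) "|" $2}' file
```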
9. Shell Programming and Scripting
Dear folks
I have a map file of around 54K lines, and some of the values in the second column have the same value; I want to find them and delete all of them. I looked over duplicate commands, but my case is not to keep one of the duplicate values. I want to remove all of the same... (4 Replies)
Discussion started by: sajmar
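A two-pass awk sketch (assuming a whitespace-separated map file): the first pass counts the second-column values, and the second pass prints only the lines whose value occurs exactly once, which drops every copy of a duplicated value rather than keeping one.

```shell
# Pass 1 (NR==FNR) counts column-2 values; pass 2 keeps only
# lines whose value appeared exactly once.
awk 'NR == FNR {c[$2]++; next} c[$2] == 1' mapfile mapfile
```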
10. UNIX for Beginners Questions & Answers
I have a job that produces a file of barcodes that gets added to every time the job runs.
I want to check the list to see if the barcode is already in the list and report it out if it is. (3 Replies)
Discussion started by: worky
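A sketch of that check, with `barcodes.txt` and `$new` as assumed names: `grep -Fxq` matches the whole line literally and quietly, so the barcode is reported if it is already listed and appended otherwise.

```shell
# Report a barcode already in the list; append it if it is new.
if grep -Fxq -- "$new" barcodes.txt; then
    echo "duplicate: $new"
else
    printf '%s\n' "$new" >> barcodes.txt
fi
```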
LEARN ABOUT SUSE
dupdb-admin
dupdb-admin(1) General Commands Manual dupdb-admin(1)
NAME
dupdb-admin - Manage the duplicate database for apport-retrace.
SYNOPSIS
dupdb-admin -f dbpath status
dupdb-admin -f dbpath dump
dupdb-admin -f dbpath changeid oldid newid
DESCRIPTION
apport-retrace(1) has the capability of checking for duplicate bugs (amongst other things). It uses an SQLite database for keeping track of
master bugs. dupdb-admin is a small tool to manage that database.
The central concept in that database is a "crash signature", a string that uniquely identifies a particular crash. It is built from the
executable path name, the signal number or exception name, and the topmost functions of the stack trace.
The database maps crash signatures to the 'master' crash id and thus can close duplicate crash reports with a reference to that master ID.
It also tracks the status of crashes (open/fixed in a particular version) to be able to identify regressions.
MODES
status Print general status of the duplicate db. For now, it only shows the time when the database was last "consolidated", i.e. when the
bug states (open/fixed) in the SQLite database were updated to the actual states in the bug tracking system.
dump Print a list of all database entries.
changeid
Change the associated crash ID for a particular crash.
OPTIONS
-f path, --database-file=path
Instead of processing the new crash reports in /var/crash/, report a particular report in an arbitrary file location. This is useful
for copying a crash report to a machine with internet connection and reporting it from there. This defaults to ~/.apport_duplicates.db.
AUTHOR
apport and the accompanying tools are developed by Martin Pitt <martin.pitt@ubuntu.com>.
Martin Pitt August 01, 2007 dupdb-admin(1)