Top Forums > Shell Programming and Scripting > awk and seen to report duplicates
Post 303025938 by vgersh99 on Thursday 15th of November 2018, 12:36:51 PM
And what's the desired output based on your sample input?
The explanation is kind of hard to follow - maybe showing the expected output would help.
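Since the thread title refers to awk's classic seen[] idiom, here is a minimal sketch of using it to report duplicates; the key column ($1) and the file name are placeholders, not the poster's actual data:

    # Print every line whose first field has already appeared:
    awk 'seen[$1]++' file

    # Or report each duplicated key once, with its count, at the end:
    awk '{ seen[$1]++ } END { for (k in seen) if (seen[k] > 1) print k, seen[k] }' file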
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

help in awk report

home directory = /export/home/jenovaux/; /log contains 3 files: /filename1.log, /filename2.log, /filename3.log. Each of these files is the log for a job, and each contains either success or failed. I want to make an awk report as the following: LOGFILENAME ... (1 Reply)
Discussion started by: jenovaux
1 Reply
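A minimal sketch of one way to build such a report, assuming each log should be reported as SUCCESS or FAILED depending on which word it contains; the path and labels are guesses, since the desired format is truncated above:

    for f in /export/home/jenovaux/log/*.log; do
        awk -v fname="$f" '
            /success/ { s = "SUCCESS" }
            /failed/  { s = "FAILED" }
            END       { print fname, (s ? s : "UNKNOWN") }  # final status per log
        ' "$f"
    done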

2. Shell Programming and Scripting

Awk to find duplicates in 2nd field

I want to find duplicates in a file based on the 2nd field. I wrote this code: nawk '{a++} END{for i in a {if (a>1) print}}' temp but could not find what's wrong with it. Appreciate the help. (5 Replies)
Discussion started by: pinnacle
5 Replies
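The array subscripts appear to have been lost in the post; a working version of what the command seems to intend (count field 2, then print the values that occur more than once):

    # Report each 2nd-field value that appears on more than one line:
    nawk '{ cnt[$2]++ } END { for (k in cnt) if (cnt[k] > 1) print k }' temp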

3. Shell Programming and Scripting

Awk Help - duplicates in $1 that match x & y in $2

I'm primarily a "Windows" systems administrator who's been getting his toes wet in the Linux waters. I am new to programming and advanced scripting, so please bear with me and my incomplete example below. I have exported all entries from our DNS zones. I used sed to remove everything other than the... (3 Replies)
Discussion started by: Omaplata
3 Replies
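A sketch of one common reading of this request: report names ($1) that occur with both of two given values in $2. The values "x" and "y" and the file name are placeholders:

    # Collect names seen with each record type, then print the overlap:
    awk '$2 == "x" { a[$1] = 1 }
         $2 == "y" { b[$1] = 1 }
         END { for (n in a) if (n in b) print n }' zones.txt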

4. Shell Programming and Scripting

awk statement to eliminate the duplicates

Consider the below output: cat tablextract2.sql CREATE PROCEDURE after72DeleteTgr(id int) BEGIN END $$ Delimiter ; CREATE PROCEDURE after72DeleteTgr(id int) BEGIN END $$ Delimiter ; # # proc_name1="after72DeleteTgr" # # echo "`awk '{if($3~v){a=1}}a;/elimiter\|DELIMITER/{exit}'... (17 Replies)
Discussion started by: vivek d r
17 Replies
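For the duplicated-block case shown in the sample, a hedged sketch: print each CREATE PROCEDURE block only the first time its name is seen. Keying on $3 is an assumption based on the sample output:

    # "dup" stays set for every line of a block whose name was seen before:
    awk '/^CREATE PROCEDURE/ { dup = seen[$3]++ } !dup' tablextract2.sql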

5. Shell Programming and Scripting

Find duplicates in column 1 and merge their lines (awk?)

Hi, I have a file (sorted by sort) with 8 tab-delimited columns. The first column contains duplicated fields and I need to merge all these identical lines. My input file: comp100002 aaa bbb ccc ddd eee fff ggg comp100003 aba aba aba aba aba aba aba comp100003 fff fff fff fff fff fff fff... (5 Replies)
Discussion started by: falcox
5 Replies
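A sketch that merges consecutive lines sharing $1 by appending fields 2 onward; it relies on the file being sorted on column 1, as the poster says, and on tabs as separators:

    awk -F'\t' '
        $1 != prev { if (NR > 1) print line; prev = $1; line = $0; next }
        { sub(/^[^\t]*\t/, ""); line = line "\t" $0 }   # drop key, append rest
        END { if (NR) print line }
    ' infile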

6. Shell Programming and Scripting

Creating duplicates in awk

Hi, I am using Ubuntu 12.04. I have a file as follows: KHO123 KHO245 KHO456 ... I want to add a second column of characters to my file, but I want to write a script to make this automatic, so that, depending on the number of lines in my first column, I get the string I need repeated... (4 Replies)
Discussion started by: Homa
4 Replies
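A sketch of the usual approach: emit the extra column once per existing line, so it automatically repeats as many times as the file has lines ("KHO-STRING" is a placeholder):

    # Print each existing value plus a fixed second column:
    awk -v tag="KHO-STRING" '{ print $1 "\t" tag }' infile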

7. Shell Programming and Scripting

Awk: Remove Duplicates

I have the following code for removing duplicate records based on fields in the input file: the 1st awk moves the duplicate records into a duplicates file, and the 2nd awk fetches the non-duplicate entries of the input file into a tmp file, which is then moved over the original file. Requirement: Can both the awk... (4 Replies)
Discussion started by: siramitsharma
4 Replies
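One way to fold both steps into a single awk invocation is to read the file twice: count on the first pass, route records on the second. Whole-record keying and the file names are assumptions:

    awk 'NR == FNR { cnt[$0]++; next }          # pass 1: count each record
         cnt[$0] > 1 { print > "duplicates"; next }
         { print > "inputfile.tmp" }' inputfile inputfile &&
    mv inputfile.tmp inputfile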

8. Shell Programming and Scripting

awk remove first duplicates

Hi All, I have searched many threads for a possibly close solution, but I was unable to find a similar scenario. I would like to print all duplicates based on the 3rd column except the first occurrence. I would also like to print single entries (non-duplicates). Input file: 12 NIL ABD LON 11 NIL ABC... (6 Replies)
Discussion started by: sybadm
6 Replies
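A two-pass sketch matching that description: count $3 first, then print every line except the first occurrence of keys that are duplicated. Singletons still print, since they are never suppressed:

    awk 'NR == FNR { cnt[$3]++; next }
         !(cnt[$3] > 1 && first[$3]++ == 0)' file file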

9. Shell Programming and Scripting

awk - Remove duplicates during array build

Greetings Experts, Issue: within an awk script, remove the duplicate occurrences that are separated by a space (one single space character). Description: I am processing 2 files using awk, and during processing I am building an array that contains duplicates; how can I delete the duplicates... (3 Replies)
Discussion started by: chill3chee
3 Replies
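Without the actual script, a generic sketch of de-duplicating space-separated tokens while building a value; split("", seen) clears the array portably between records:

    awk '{
        split("", seen); out = ""
        for (i = 1; i <= NF; i++)
            if (!seen[$i]++)                     # keep first occurrence only
                out = out (out ? " " : "") $i
        print out
    }' infile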

10. Shell Programming and Scripting

awk to Sum columns when other column has duplicates and append one column value to another with Care

Hi Experts, please bear with me, I need help. I am learning awk and am stuck on one issue. First point: I want to sum up the values of columns 7, 9, 11, 13 and 15 if rows in column 5 are duplicates. No action is to be taken for rows where the value in column 5 is unique. Second point: For... (1 Reply)
Discussion started by: as7951
1 Reply
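A two-pass sketch for the first point: accumulate columns 7, 9, 11, 13 and 15 per $5 key, print unique rows untouched, and emit one summed row per duplicated key. Reusing the last row's other fields for the summed row is an assumption:

    awk 'NR == FNR { cnt[$5]++; next }           # pass 1: count each $5
         cnt[$5] == 1 { print; next }            # unique: pass through
         {
             for (c = 7; c <= 15; c += 2) sum[$5, c] += $c
             if (++done[$5] == cnt[$5]) {        # last row of this key
                 for (c = 7; c <= 15; c += 2) $c = sum[$5, c]
                 print
             }
         }' infile infile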
MSGUNIQ(1)                              GNU                              MSGUNIQ(1)

NAME
       msguniq - unify duplicate translations in message catalog

SYNOPSIS
       msguniq [OPTION] [INPUTFILE]

DESCRIPTION
       Unifies duplicate translations in a translation catalog. Finds duplicate
       translations of the same message ID. Such duplicates are invalid input
       for other programs like msgfmt, msgmerge or msgcat. By default,
       duplicates are merged together. When using the --repeated option, only
       duplicates are output, and all other messages are discarded. Comments
       and extracted comments will be cumulated, except that if --use-first is
       specified, they will be taken from the first translation. File positions
       will be cumulated. When using the --unique option, duplicates are
       discarded.

       Mandatory arguments to long options are mandatory for short options too.

   Input file location:
       INPUTFILE              input PO file
       -D, --directory=DIRECTORY
                              add DIRECTORY to list for input files search

       If no input file is given or if it is -, standard input is read.

   Output file location:
       -o, --output-file=FILE write output to specified file

       The results are written to standard output if no output file is
       specified or if it is -.

   Message selection:
       -d, --repeated         print only duplicates
       -u, --unique           print only unique messages, discard duplicates

   Output details:
       -t, --to-code=NAME     encoding for output
       --use-first            use first available translation for each message,
                              don't merge several translations
       -e, --no-escape        do not use C escapes in output (default)
       -E, --escape           use C escapes in output, no extended chars
       --force-po             write PO file even if empty
       -i, --indent           write the .po file using indented style
       --no-location          do not write '#: filename:line' lines
       -n, --add-location     generate '#: filename:line' lines (default)
       --strict               write out strict Uniforum conforming .po file
       -w, --width=NUMBER     set output page width
       --no-wrap              do not break long message lines, longer than the
                              output page width, into several lines
       -s, --sort-output      generate sorted output
       -F, --sort-by-file     sort output by file location

   Informative output:
       -h, --help             display this help and exit
       -V, --version          output version information and exit

AUTHOR
       Written by Bruno Haible.

REPORTING BUGS
       Report bugs to <bug-gnu-gettext@gnu.org>.

COPYRIGHT
       Copyright (C) 2001-2002 Free Software Foundation, Inc. This is free
       software; see the source for copying conditions. There is NO warranty;
       not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

SEE ALSO
       The full documentation for msguniq is maintained as a Texinfo manual. If
       the info and msguniq programs are properly installed at your site, the
       command info msguniq should give you access to the complete manual.

GNU gettext 0.11.4                   July 2002                           MSGUNIQ(1)
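For quick reference, two invocations using flags documented above; the .po file names are placeholders:

    # Report only the duplicated entries in a catalog:
    msguniq --repeated messages.po

    # Merge duplicates and write the cleaned catalog to a file:
    msguniq -o cleaned.po messages.po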