UNIX for Beginners Questions & Answers
Advice to print lines before and after pattern match and checking and removing duplicate files
Post 303046126 by RudiC on Friday 24th of April 2020 05:18:42 PM
Why check for duplicate files if you can avoid producing them in the first place? Try
Code:
$ touch filesdone
$ awk -v LCNT=10 -v PAT="CORRUPTION DETECTED" '
BEGIN           {LCNT++                                 # widen the buffer by one to hold the match line itself
                }
FNR == 1        {PR = 0                                 # new file: reset the print-until marker
                 print "^" FILENAME "$" >> "filesdone"  # record the anchored file name in the control file
                }
                {T[FNR%LCNT] = $0                       # cyclic buffer of the most recent lines read
                }
$0 ~ PAT        {print ""                               # blank separator before each hit
                 PR = FNR + LCNT                        # print until FNR reaches this marker: the match plus its trailing context
                 for (i=1; i<LCNT; i++) print T[(FNR+i)%LCNT]   # replay the buffered context before the match, oldest first
                }
FNR < PR                                                # default action: print the match line and the lines after it
' $(ls $DIR_PATH/alert_${sid}* | grep -vf filesdone) /dev/null

This little script keeps an LCNT-deep (here: 10) cyclic buffer of the lines encountered and, when the search pattern matches, prints those LCNT buffered lines, the matching line itself, and the LCNT lines to come. Caveat: if the pattern is encountered again BEFORE the latter have all been printed, they stop, and the cycle starts anew with the buffer being replayed. You may also redirect the results - right within awk itself - to individual output files, one per original.
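For instance, a minimal sketch of that per-file redirection could look like the below - the OUT variable and the .hits suffix are my assumptions, not part of the original; pick any naming scheme you like:
Code:
$ awk -v LCNT=10 -v PAT="CORRUPTION DETECTED" '
BEGIN           {LCNT++}
FNR == 1        {if (OUT) close(OUT)                    # release the previous file descriptor
                 OUT = FILENAME ".hits"                 # one result file per original (assumed naming)
                 PR = 0
                 print "^" FILENAME "$" >> "filesdone"
                }
                {T[FNR%LCNT] = $0}
$0 ~ PAT        {print "" > OUT
                 PR = FNR + LCNT
                 for (i=1; i<LCNT; i++) print T[(FNR+i)%LCNT] > OUT
                }
FNR < PR        {print > OUT}
' $(ls $DIR_PATH/alert_${sid}* | grep -vf filesdone) /dev/null

Each matched block then ends up in a .hits file next to its source file instead of on stdout.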

When a file is first encountered, its name, adorned with BOL and EOL anchors, is recorded in a, say, "control file" and will never be treated again. Feel free to put the "control file" anywhere else. Small drawback: you have to touch the "control file" once before the first run to make sure it exists.
The list of files presented to awk is the ls output for the directory with the "already done" files removed by grep's -v option. The empty file /dev/null serves as a dummy argument so awk won't fall back to reading the terminal / stdin when no new files exist because all the old ones have already been processed.
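To illustrate with made-up names (the /logs directory and the alert_ORCL files below are assumptions): after one run the control file holds the anchored names, and grep -vf drops exactly those lines from the next run's candidate list:
Code:
$ cat filesdone
^/logs/alert_ORCL_001.log$
^/logs/alert_ORCL_002.log$
$ ls /logs/alert_ORCL* | grep -vf filesdone
/logs/alert_ORCL_003.log

The ^ and $ anchors make each entry match a whole line only, so a recorded name can't accidentally filter out another file whose name merely contains it.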


Give it a shot and report back.

Last edited by RudiC; 04-24-2020 at 06:25 PM.