Hi Jim,


The grep works in Linux but not in Solaris. Sorry, forgot to mention, the OS is SunOS <hostname> 5.11 11.3 sun4v sparc sun4v
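
For anyone else who hits this: it looks like the stock Solaris /usr/bin/grep simply has no -A/-B context options. If GNU grep is on the box (often ggrep or /usr/gnu/bin/grep on Solaris 11) I can just use that, otherwise I could fake the context printing in nawk. A rough sketch; the 2-before/3-after counts and the log name are made up for illustration, and unlike real grep it will re-print lines when matches overlap:

Code:
# rough stand-in for "grep -B2 -A3 'CORRUPTION DETECTED'" on stock Solaris grep
nawk -v B=2 -v A=3 '
    /CORRUPTION DETECTED/ {
        for (i = NR - B; i < NR; i++)      # dump the saved leading context
            if (i > 0) print buf[i % B]
        print
        buf[NR % B] = $0                   # keep the match line in the ring too
        tail = A                           # arm the trailing-context counter
        next
    }
    tail > 0 { print; tail--; buf[NR % B] = $0; next }
    { buf[NR % B] = $0 }                   # ring buffer of the last B lines
' mydb.log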


Yeah, the code below works and files.tmp does have the list of files with their checksums; I only need to retain one file from each group. Trying to work out how to sort the output AND retain just the lowest-numbered file.



Code:
cd /path/to/logs

grep -l "CORRUPTION DETECTED" *.log  |
while read fname
do
   cksum $fname
done | sort -n -k1 > files.tmp
# files.tmp has a sorted list of files - by checksum




Code:
$: cat files.tmp
1237008222      10664   log.10
1237008222      10664   log.12
1237008222      10664   log.14
1237008222      10664   log.16
1237008222      10664   log.18
1237008222      10664   log.2
1237008222      10664   log.4
1237008222      10664   log.6
1237008222      10664   log.8
2296620157      10696   log.1
2296620157      10696   log.11
2296620157      10696   log.13
2296620157      10696   log.15
2296620157      10696   log.17
2296620157      10696   log.3
2296620157      10696   log.5
2296620157      10696   log.7
2296620157      10696   log.9


So from the list above, I will only want to retain log.1 and log.2, i.e. group the list above by checksum and keep the lowest-numbered file in each group. Googling at the moment to see if there is an easier way of deleting from the files.tmp list besides how I am doing it below:


Code:
#!/bin/ksh
#
# for each distinct checksum, sort that group's files by the numeric
# suffix after the dot, keep the first (lowest) and rm the rest

awk '{ print $1 }' files.tmp | sort -u > tmp.00

while read checksum
do
   grep "^$checksum" files.tmp | sort -t. -k2n | awk 'NR > 1 { print $3 }' | xargs rm
done < tmp.00


BTW, what is the code below meant to be? I think there is something missing: the bare oldfile line looks like it should be oldname="", and the ls logfile* should probably be a sorted cksum so the loop actually reads a sum, size and name. Is my corrected guess below right, and do I just run it after the checksum step above?



Code:
oldsum=0
oldname=""                  # guessing the bare "oldfile" line was meant to be this
cksum logfile* | sort -n |  # guessing cksum, not ls: the loop reads sum, size, name,
while read sum size name    # and duplicates have to be adjacent for this to work
do
   if [ "$sum" -eq "$oldsum" ] ; then
      echo "$oldname and $name are duplicates"
      # put a rm command here after you see this work correctly for you
      # assuming you delete the second file name
      continue
   fi
   oldname=$name
   oldsum=$sum
done
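
If I am reading it right, it only compares each line with the previous one, so the input has to be sorted by checksum, and it keeps whichever name happens to come first in each group. With these file names that is lexical order (log.10 sorts before log.2), so it would keep log.10 rather than log.2 for the first group; the suffix-aware sort further up avoids that. Also, since files.tmp already holds the sorted cksum output, I guess I can feed it straight in instead of recomputing:

Code:
# reuse files.tmp (already sorted by checksum) instead of running cksum again
oldsum=0
oldname=""
while read sum size name
do
   if [ "$sum" -eq "$oldsum" ] ; then
      echo "$oldname and $name are duplicates"   # swap in rm once this looks right
      continue
   fi
   oldname=$name
   oldsum=$sum
done < files.tmp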

 
