Removing dupes within 2 delimited areas in a large dictionary file
Posted by spacebar on 5 December 2012
Perl script:
Code:
#!/usr/bin/perl
use strict;
use warnings;

my $in_file  = '/temp/tmp/t';            # file containing your example data
my $out_file = '/temp/tmp/new_file.txt'; # new file with the dupes removed
my $line;
my @inla;
my @outla;

open( my $in_file_fh,  '<', $in_file  ) or die "Can't open $in_file: $!\n";
open( my $out_file_fh, '>', $out_file ) or die "Can't open $out_file: $!\n";

DATA: while ( $line = <$in_file_fh> ) {  # read the input file
        # When a '#DATA' line is found, write it to the output file,
        # then also copy the line that follows it, the '#VALID x' line.
        if ( $line =~ /^#DATA/ ) {
          print $out_file_fh $line;
          $line = <$in_file_fh>;
          print $out_file_fh $line;
          # Read lines until the '#END' line is reached.
          while ( $line = <$in_file_fh> ) {
            if ( $line =~ /^#END/ ) {
              # Build an anonymous hash from the buffered lines in the
              # array @inla; duplicate lines collapse into a single key,
              # and the keys land in @outla as the de-duplicated list.
              @outla = keys %{ { map { $_ => 1 } @inla } };
              # Write the array to the output file, sorted.
              print $out_file_fh ( sort @outla );
              # Write the '#END' line to the output file.
              print $out_file_fh $line;
              # Reset the buffer and jump back to the outer loop so any
              # further '#DATA' sections get the same treatment.
              # (A 'last DATA;' here would stop after the first section.)
              @inla = ();
              next DATA;
            }
            # Buffer the lines between the '#VALID x' line and '#END'.
            push @inla, $line;
          }
        }
      }

close $in_file_fh;
close $out_file_fh;
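
The line doing the real work is the map/keys idiom, which can look cryptic. A small self-contained illustration, with made-up sample words, shows how it collapses duplicates:

Code:
my @words = ( "the\n", "cat\n", "the\n", "awk\n", "cat\n" );
# map turns the list into pairs ("the\n" => 1, "cat\n" => 1, ...);
# inside the anonymous hash { ... } the duplicate keys collapse, so
# keys returns each distinct line exactly once, in no particular order.
my @uniq = keys %{ { map { $_ => 1 } @words } };
print sort @uniq;   # prints awk, cat, the -- one line each, sorted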


Code:
$ cat new_file.txt
#DATA
#VALID 1
a
all
an
and
are
as
awk
below
case
could
data
does
dupes
duplicates
ends
english
examples
file
find
footer
from
given
happens
have
header
however
i
identify
in
input
is
issue
it
language
large
need
not
of
or
output
perl
perso-arabic
real
removing
repeated
result
sample
script
section
since
so
sort
that
the
them
time
up
what
which
with
within
words
#END
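
The script above buffers every line of a section in @inla, duplicates included, before de-duping. Since your dictionary is large, you could instead drop dupes as they are read, so duplicate lines are never held in memory at all. Below is a minimal stand-alone sketch of that variation (untested against your real data, and the script name is just an example): it reads STDIN, writes STDOUT (run as: perl dedup_sections.pl < t > new_file.txt), and assumes the simple #DATA / #VALID / #END layout from your sample:

Code:
use strict;
use warnings;

my %seen;      # unique lines of the current section
my $in_data;   # true between the '#VALID x' line and '#END'

while ( my $line = <STDIN> ) {
    if ( $line =~ /^#(?:DATA|VALID)/ ) {   # header lines pass straight through
        print $line;
        $in_data = 1 if $line =~ /^#VALID/;
    }
    elsif ( $line =~ /^#END/ ) {           # end of section: flush it, sorted
        print sort keys %seen;
        print $line;
        %seen    = ();
        $in_data = 0;
    }
    elsif ($in_data) {                     # hash keys collapse the dupes
        $seen{$line} = 1;
    }
}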
