Shell Programming and Scripting: Removing dupes within 2 delimited areas in a large dictionary file
Post 302740273 by Jotne on Thursday 6th of December 2012, 01:10:29 AM
You may use something like this:
Code:
awk '/DATA/,/END/'

to get data between DATA and END, then sort it.
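Building on that, here is a hedged sketch (the file name dictionary.txt and the exact DATA/END markers are assumptions) that removes duplicate lines inside each DATA...END area in place, so the rest of the file and the original line order are left untouched:

Code:
awk '
    /DATA/ { inblock = 1; split("", seen); print; next }   # start of a delimited area: reset the seen-list
    /END/  { inblock = 0; print; next }                    # end of a delimited area
    inblock { if (!seen[$0]++) print; next }               # inside an area: print each line only once
    { print }                                              # outside the areas: copy through unchanged
' dictionary.txt > dictionary.dedup

If the duplicates really do need sorting first, the simpler route from the reply, awk '/DATA/,/END/' dictionary.txt | sort -u, extracts just the delimited block (markers included) and sorts it with duplicates dropped.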
 

10 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Issue with Removing Carriage Return (^M) in delimited file

Hi - I tried to remove ^M in a delimited file using "tr -d '\r'" and "sed 's/^M//g'", but it does not work quite well. While the ^M is removed, the format of the record is still cut in half, like a,b, c c,d,e The delimited file is generated using a sh script by outputting a SQL query result to... (7 Replies)
Discussion started by: sirahc
7 Replies
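For the ^M question above, the usual hedged one-liners are below (input.csv and output.csv are placeholder names). If the record still comes out split in two after the carriage returns are gone, the field most likely also contains a bare line feed, which neither command touches:

Code:
# delete every carriage return in the stream
tr -d '\r' < input.csv > output.csv

# sed alternative; \r is understood by GNU sed, on other seds type a literal CR with Ctrl-V Ctrl-M
sed 's/\r//g' input.csv > output.csv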

2. Shell Programming and Scripting

Removing blanks in a text tab delimited file

Hi Experts I am very new to perl and need to make a script using perl. I would like to remove blanks in a text tab delimited file in a specific column range (column 21 to column 43). Sample input and output shown below: Input: 117 102 650 652 654 656 117 93 95... (3 Replies)
Discussion started by: Faisal Riaz
3 Replies
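Assuming "remove blanks" means stripping space characters out of the values in columns 21 through 43 of a tab-delimited file (the snippet does not show enough of the data to be sure), a sketch of the idea, shown in awk rather than perl to match the rest of this thread, would be:

Code:
awk 'BEGIN { FS = OFS = "\t" }
     { for (i = 21; i <= 43 && i <= NF; i++) gsub(/ /, "", $i); print }' input.tsv > output.tsv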

3. Shell Programming and Scripting

Removing Embedded Newline from Delimited File

Hey there - a bit of background on what I'm trying to accomplish, first off. I am trying to load the data from a pipe delimited file into a database. The loading tool that I use cannot handle embedded newline characters within a field, so I need to scrub them out. Solutions that I have tried... (7 Replies)
Discussion started by: bbetteridge
7 Replies
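One common way to scrub embedded newlines from a pipe-delimited extract is to keep appending physical lines until a full record's worth of delimiters has been seen. The sketch below assumes a record has 10 fields, i.e. 9 pipes, and joins continuation lines with a space; input.dat and the field count are placeholders:

Code:
awk '
{
    buf = (buf == "" ? $0 : buf " " $0)   # glue the continuation line onto the buffer
    if (gsub(/\|/, "|", buf) >= 9) {      # gsub here just counts the pipes in the buffer
        print buf                         # a complete record: emit it
        buf = ""
    }
}
END { if (buf != "") print buf }          # flush a trailing partial record, if any
' input.dat > scrubbed.dat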

4. Shell Programming and Scripting

Large pipe delimited file that I need to add CR/LF every n fields

I have a large flat file with variable length fields that are pipe delimited. The file has no new line or CR/LF characters to indicate a new record. I need to parse the file and after some number of fields, I need to insert a CR/LF to start the next record. Input file ... (2 Replies)
Discussion started by: clintrpeterson
2 Replies
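If the input really is one long pipe-delimited stream with no record terminators, awk can read it a field at a time by making "|" the record separator, which keeps memory use small. The sketch below assumes 5 fields per record; n and input.dat are placeholders you would set to the real record width and file name:

Code:
awk -v n=5 '
BEGIN { RS = "|" }
{
    sub(/\n$/, "")                              # drop a file-final newline, if present
    printf "%s", $0
    printf "%s", (NR % n == 0 ? "\r\n" : "|")   # CR/LF after every n-th field, pipe otherwise
}
' input.dat > output.dat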

5. Shell Programming and Scripting

Extracting a portion of data from a very large tab delimited text file

Hi All I wanted to know how to effectively delete some columns in a large tab delimited file. I have a file that contains 5 columns and almost 100,000 rows 3456 f g t t 3456 g h 456 f h 4567 f g h z 345 f g 567 h j k l This is a very large data file and tab delimited. I need... (2 Replies)
Discussion started by: Lucky Ali
2 Replies
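For dropping (or keeping) whole columns of a tab-delimited file, cut is usually the fastest tool; awk does the same job when the kept columns also need reordering. Which columns to keep is an assumption here:

Code:
# keep only columns 1, 3 and 5 (tab is cut's default delimiter)
cut -f1,3,5 input.tsv > output.tsv

# awk equivalent, which also lets you reorder the kept columns
awk 'BEGIN { FS = OFS = "\t" } { print $1, $3, $5 }' input.tsv > output.tsv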

6. Shell Programming and Scripting

Script Optimization - large delimited file, for loop with many greps

Since there are approximately 75K gsfiles and hundreds of stfiles per gsfile, this script can take hours. How can I rewrite this script, so that it's much faster? I'm not as familiar with perl but I'm open to all suggestions. ls file.list>$split for gsfile in `cat $split`; do csplit... (17 Replies)
Discussion started by: verge
17 Replies
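Without the full script it is hard to rewrite it outright, but the usual cure for a grep-per-value loop is a single awk pass that loads the lookup values into an array first. A hedged sketch with made-up file names (patterns.txt holds the keys, data.txt is the file being searched):

Code:
awk '
    NR == FNR { want[$1]; next }   # first file: remember every key to look for
    $1 in want                     # second file: print lines whose first field is a wanted key
' patterns.txt data.txt > matched.txt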

7. Shell Programming and Scripting

Removing Dupes from huge file- awk/perl/uniq

Hi, I have the following command in place nawk -F, '!a[$1,$2,$3]++' file > file.uniq It has been working perfectly as per requirements, removing duplicates while taking into consideration only the first 3 fields. Recently it has started giving the below error: bash-3.2$ nawk -F, '!a[$1,$2,$3]++'... (17 Replies)
Discussion started by: makn
17 Replies
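For reference, deduplicating a comma-delimited file on its first three fields is normally written with those fields spelled out in the array subscript (the square brackets tend to get eaten by forum formatting); file is a placeholder name:

Code:
awk -F, '!seen[$1,$2,$3]++' file > file.uniq

If memory is the limiting factor on a huge file, sort -t, -k1,3 -u file does the same deduplication on disk, at the cost of reordering the lines and not necessarily keeping the first occurrence of each key.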

8. Shell Programming and Scripting

Merging dupes on different lines in a dictionary

I am working on a homonym dictionary of names i.e. names which are clustered together according to their “sound-alike” pronunciation: An example will make this clear: Since the dictionary is manually constructed it often happens that inadvertently two sets of “homonyms” which should be grouped... (2 Replies)
Discussion started by: gimley
2 Replies

9. UNIX for Advanced & Expert Users

Need optimized awk/perl/shell to give the statistics for the Large delimited file

I have a file whose size is around 24 G, with 14 columns, delimited with "|". My requirement - can anyone provide me the fastest and best way to get the below results: number of records in the file; first column and second column - unique counts. Thanks for your time Karti ------ Post updated at... (3 Replies)
Discussion started by: kartikirans
3 Replies
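A single awk pass can produce all three numbers at once; bigfile.dat is a placeholder, and the sketch assumes the distinct values of columns 1 and 2 fit in memory (if they do not, cut -d'|' -f1 bigfile.dat | sort -u | wc -l per column is the disk-backed fallback):

Code:
awk -F'|' '
    !($1 in c1) { c1[$1]; n1++ }   # count a column-1 value the first time it appears
    !($2 in c2) { c2[$2]; n2++ }   # same for column 2
    END {
        print "records      :", NR
        print "unique col 1 :", n1
        print "unique col 2 :", n2
    }
' bigfile.dat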

10. Shell Programming and Scripting

Remove dupes in a large file

I have a large file, 1.5 GB, and want to sort the file. I used the following AWK script to do the job !x[$0]++ The script works but it is very slow and takes over an hour to do the job. I suspect this is because the file is not sorted. Any solution to speed up the AWK script or a Perl script would... (4 Replies)
Discussion started by: gimley
4 Replies
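For what it is worth, the speed of !x[$0]++ does not depend on the input being sorted; the cost is that every distinct line is held in memory, which is what hurts on a 1.5 GB file. Two hedged options (big.txt is a placeholder name):

Code:
# one pass, original order preserved, needs RAM for every distinct line
awk '!seen[$0]++' big.txt > big.uniq

# if output order does not matter, let sort spill to temporary files on disk instead
sort -u big.txt > big.uniq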
WTOC(1)                          General Commands Manual                          WTOC(1)

NAME
       wtoc - convert a Wnn text-form dictionary (or dictionaries) into Canna text-form dictionaries

SYNOPSIS
       wtoc [-f hinshidata] [wnnjisho] [cannajisho]

DESCRIPTION
       wtoc converts a Wnn text-form dictionary file into a Canna text-form dictionary file. If all dictionary files are omitted, the Wnn dictionary data is read from the standard input; in this case, the dictionary of the Japanese Input System is written to the standard output. If one dictionary file is specified, it is regarded as a Wnn dictionary, and the Canna dictionary is written to the standard output.

OPTIONS
       -f hinshidata
              The user can add new information about word-type correspondence between Wnn and Canna. The word-type correspondence information must be described in the hinshidata file: within one line, give the Wnn word-type name and the Canna word type, delimited by a space(s) or a tab.

              Wnn word type        Canna word type
              Adverb               #F04

EXAMPLE
       % wtoc -f tsuikahinshi kihon.u kihon.t
              Reads word-type correspondence information from tsuikahinshi, then converts the Wnn text-form dictionary kihon.u into the Canna text-form dictionary kihon.t.

       % wtoc special.u | lpr
              Converts the Wnn text-form dictionary special.u into a Canna text-form dictionary, then sends the result to the line printer.

SEE ALSO
       ctow(1)

                                                                                   WTOC(1)