Removing dupes within 2 delimited areas in a large dictionary file
Post 302740867 by gimley on Friday 7th of December 2012, 02:14:31 AM
Hello,
Sorry, my broadband was down and I could not try out the Perl script. It works beautifully on ASCII (8-bit) data, but as soon as UTF-8 or UTF-16 data is fed to it, no output is produced.
Does Perl have problems with Unicode?
Since my data is in Perso-Arabic script, the script does not work on it.
Is there any roundabout way to solve the problem? I am using the latest version of ActiveState Perl and, in despair, even downloaded Strawberry Perl, but the data still does not go through.
I am attaching the zip file containing data in UTF-8 format, with Hindi as an example. There are two files: testdic and testdic.out.
Many thanks for the beautifully commented script. I modified it slightly, as shown below, to take the input and output files from the command line:
Code:
#!/usr/bin/perl
my $line;
my @inla;
my @outla;

The rest of the code remains the same.
I do not think this change would affect reading a UTF-8 file.
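For reference, this is roughly the kind of UTF-8-aware skeleton I have in mind (an untested sketch: the file names testdic and testdic.out are the ones in the attached zip, the script name dedupe.pl is just an example, and the actual de-duplication logic is omitted because it stays the same as in the original script):
Code:
#!/usr/bin/perl
use strict;
use warnings;
use open ':std', ':encoding(UTF-8)';   # UTF-8 layer on STDIN/STDOUT/STDERR and on handles opened in this scope

# Illustrative usage: perl dedupe.pl testdic testdic.out
my ( $infile, $outfile ) = @ARGV;
die "Usage: $0 infile outfile\n" unless defined $infile && defined $outfile;

open my $in,  '<:encoding(UTF-8)', $infile  or die "Cannot open $infile: $!";
open my $out, '>:encoding(UTF-8)', $outfile or die "Cannot open $outfile: $!";

while ( my $line = <$in> ) {
    chomp $line;
    # ... the de-duplication of the two delimited areas goes here, unchanged ...
    print {$out} $line, "\n";
}

close $in;
close $out;

If UTF-16 input also has to be handled, the layer on the input handle would need to be ':encoding(UTF-16)' (or UTF-16LE/UTF-16BE), since Perl does not guess the encoding on its own.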
Many thanks once again
 

10 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Issue with Removing Carriage Return (^M) in delimited file

Hi - I tried to remove ^M in a delimited file using tr -d '\r' and sed 's/^M//g', but it does not work quite well. While the ^M is removed, the format of the record is still cut in half, like a,b, c c,d,e The delimited file is generated using an sh script by outputting a SQL query result to... (7 Replies)
Discussion started by: sirahc
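A Perl rendering of the same clean-up, for comparison (the file name is a placeholder; if the records still come out split in half after the carriage returns are gone, the cause is usually a genuine embedded newline rather than a ^M):
Code:
# Strip carriage returns in place, keeping a .bak backup (placeholder file name).
perl -i.bak -pe 's/\r//g' datafile.txt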

2. Shell Programming and Scripting

Removing blanks in a text tab delimited file

Hi Experts, I am very new to perl and need to make a script using perl. I would like to remove blanks in a text tab-delimited file in a specific column range (column 21 to column 43). Sample input and output shown below: Input: 117 102 650 652 654 656 117 93 95... (3 Replies)
Discussion started by: Faisal Riaz

3. Shell Programming and Scripting

Removing Embedded Newline from Delimited File

Hey there - a bit of background on what I'm trying to accomplish, first off. I am trying to load the data from a pipe delimited file into a database. The loading tool that I use cannot handle embedded newline characters within a field, so I need to scrub them out. Solutions that I have tried... (7 Replies)
Discussion started by: bbetteridge

4. Shell Programming and Scripting

Large pipe delimited file that I need to add CR/LF every n fields

I have a large flat file with variable length fields that are pipe delimited. The file has no new line or CR/LF characters to indicate a new record. I need to parse the file and after some number of fields, I need to insert a CR/LF to start the next record. Input file ... (2 Replies)
Discussion started by: clintrpeterson
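One possible way to do that re-flow in Perl, reading one pipe-delimited field at a time so the whole file never has to sit in memory (a sketch only: the 5 fields per record and the file names are illustrative, not taken from that thread):
Code:
# Re-flow a newline-less, pipe-delimited stream into records of $n fields each.
perl -ne '
    BEGIN { $/ = "|"; $n = 5; $i = 0 }   # read the input field by field
    chomp;                               # drop the trailing "|"
    print $_;
    if ( ++$i % $n ) { print "|" } else { print "\n" }
' input.dat > output.dat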

5. Shell Programming and Scripting

Extracting a portion of data from a very large tab delimited text file

Hi All, I wanted to know how to effectively delete some columns in a large tab delimited file. I have a file that contains 5 columns and almost 100,000 rows: 3456 f g t t 3456 g h 456 f h 4567 f g h z 345 f g 567 h j k l. This is a very large data file and tab delimited. I need... (2 Replies)
Discussion started by: Lucky Ali
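As a sketch, this kind of column selection can be streamed line by line; in Perl it might look like the one-liner below (which columns to keep depends on the real requirement, so the indices and file names here are only examples; for fixed tab-delimited columns, cut -f would do the same job):
Code:
# Keep columns 1, 2 and 4 of a tab-delimited file (0-based indices 0, 1, 3).
perl -F'\t' -lane 'print join "\t", @F[0,1,3]' input.tsv > output.tsv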

6. Shell Programming and Scripting

Script Optimization - large delimited file, for loop with many greps

Since there are approximately 75K gsfiles and hundreds of stfiles per gsfile, this script can take hours. How can I rewrite this script so that it's much faster? I'm not as familiar with perl but I'm open to all suggestions. ls file.list > $split; for gsfile in `cat $split`; do csplit... (17 Replies)
Discussion started by: verge

7. Shell Programming and Scripting

Removing Dupes from huge file- awk/perl/uniq

Hi, I have the following command in place nawk -F, '!a++' file > file.uniq It has been working perfectly as per requirements, by removing duplicates by taking into consideration only first 3 fields. Recently it has started giving below error: bash-3.2$ nawk -F, '!a++'... (17 Replies)
Discussion started by: makn
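The nawk command in that excerpt appears with its array subscript stripped by the forum rendering; the underlying idiom simply keeps the first line seen for a given key. A hedged Perl sketch of the same idea, keyed on the first three comma-separated fields as described in the excerpt:
Code:
# Print a line only the first time its first three comma-separated fields are seen.
perl -F, -lane 'print unless $seen{ join ",", @F[0..2] }++' file > file.uniq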

8. Shell Programming and Scripting

Merging dupes on different lines in a dictionary

I am working on a homonym dictionary of names i.e. names which are clustered together according to their “sound-alike” pronunciation: An example will make this clear: Since the dictionary is manually constructed it often happens that inadvertently two sets of “homonyms” which should be grouped... (2 Replies)
Discussion started by: gimley

9. UNIX for Advanced & Expert Users

Need optimized awk/perl/shell to give the statistics for the Large delimited file

I have a file whose size is around 24 G, with 14 columns, delimited with "|". My requirement: can anyone provide me the fastest and best way to get the below results - number of records of the file, and unique counts for the first and second columns. Thanks for your time, Karti ------ Post updated at... (3 Replies)
Discussion started by: kartikirans
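A single streaming pass is usually enough for those counts; a sketch in Perl, assuming the first and second "|"-delimited columns are the ones of interest (the file name is a placeholder, and note that the distinct values of both columns are held in memory):
Code:
# One pass: total record count plus unique counts of the first two columns.
perl -F'\|' -lane '
    $records++;
    $col1{ $F[0] }++;
    $col2{ $F[1] }++;
    END {
        print "records: $records";
        print "unique column 1: ", scalar keys %col1;
        print "unique column 2: ", scalar keys %col2;
    }
' bigfile.dat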

10. Shell Programming and Scripting

Remove dupes in a large file

I have a large file (1.5 GB) and want to sort the file. I used the following AWK script to do the job: !x++ The script works, but it is very slow and takes over an hour to do the job. I suspect this is because the file is not sorted. Any solution to speed up the AWK script or a Perl script would... (4 Replies)
Discussion started by: gimley
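That awk command has also lost its array subscript to the forum rendering; the whole-line version of the idiom, written in Perl, is below. The usual caveat applies: the hash keeps every distinct line in memory, so for a 1.5 GB file sort -u (which does not preserve order) can be the safer route.
Code:
# Keep the first occurrence of each line, preserving the original order.
perl -ne 'print unless $seen{$_}++' big.dic > big.uniq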