12-07-2012
Hi.
Quote:
Originally Posted by
gimley
... Does PERL give problems with Unicode? ...
You might want to start with:
perldoc perlunitut, then
man perlunicode
You seem to be using Windows. I have used the utf8 facilities on GNU/Linux systems, but I have no idea whether that might be available in/with ActiveState Perl.
Doing an advanced search here for
perl utf8 yields about 50 hits, some of which may be useful.
Best wishes ... cheers, drl
( Edit 1: add note about advanced search )
Last edited by drl; 12-07-2012 at 07:43 AM..
10 More Discussions You Might Find Interesting
1. UNIX for Advanced & Expert Users
Hi - I tried to remove ^M in a delimited file using tr -d '\r' and sed 's/^M//g', but neither works quite right. While the ^M is removed, the record is still cut in half, like
a,b, c
c,d,e
The delimited file is generated by a sh script that outputs a SQL query result to... (7 Replies)
Discussion started by: sirahc
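A hedged sketch of both steps (file.csv is a placeholder name): stripping the carriage returns, then rejoining records that are still split, assuming a complete record never ends with the delimiter.

```shell
# Strip carriage returns (the ^M characters).
tr -d '\r' < file.csv > file.clean

# If records are still split after that, the breaks are plain newlines, not ^M.
# Join a line with the next one whenever it ends in a trailing comma.
sed -e :a -e '/,$/N' -e 's/,\n/,/' -e ta file.clean > file.joined
```

The sed loop appends the next line (N) while the buffer still ends with a comma, so multi-way splits are handled too.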
2. Shell Programming and Scripting
Hi Experts
I am very new to perl and need to make a script using perl.
I would like to remove blanks in a text tab-delimited file in a specific column range (column 21 to column 43). Sample input and output are shown below:
Input:
117 102 650 652 654 656
117 93 95... (3 Replies)
Discussion started by: Faisal Riaz
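A hedged sketch, assuming "blanks" means space characters embedded inside fields 21 through 43 (input.txt is a placeholder):

```shell
# Walk only fields 21..43 (and no further than NF) and delete spaces in them;
# the trailing 1 prints every line, modified or not.
awk -F'\t' -v OFS='\t' '{ for (i = 21; i <= 43 && i <= NF; i++) gsub(/ /, "", $i) } 1' input.txt
```

If "blanks" instead means empty fields that should be dropped, the loop would need to rebuild the record rather than edit fields in place.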
3. Shell Programming and Scripting
Hey there - a bit of background on what I'm trying to accomplish, first off. I am trying to load the data from a pipe delimited file into a database. The loading tool that I use cannot handle embedded newline characters within a field, so I need to scrub them out.
Solutions that I have tried... (7 Replies)
Discussion started by: bbetteridge
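One hedged approach: if every complete record has a fixed number of pipe-delimited fields (nf=5 below is purely illustrative, and data.psv is a placeholder), lines can be buffered until enough delimiters have been seen.

```shell
# Rebuild records split by embedded newlines. A record is considered complete
# once the buffer contains nf-1 pipes. Limitation: a break inside the FINAL
# field is undetectable this way unless records end with a trailing delimiter.
awk -v nf=5 '
{
  rec = (rec == "" ? $0 : rec " " $0)    # replace the embedded newline with a space
  t = rec
  if (gsub(/\|/, "|", t) >= nf - 1) {    # gsub returns the count, so this counts pipes
    print rec
    rec = ""
  }
}' data.psv
```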
4. Shell Programming and Scripting
I have a large flat file with variable length fields that are pipe delimited. The file has no new line or CR/LF characters to indicate a new record. I need to parse the file and after some number of fields, I need to insert a CR/LF to start the next record.
Input file ... (2 Replies)
Discussion started by: clintrpeterson
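A hedged sketch of that parse: treating the pipe as the record separator lets awk stream the fields one by one and emit a line break after every nf of them (nf=14 and bigfile.dat are illustrative; substitute the real field count and name).

```shell
# Re-insert a newline after every nf pipe-delimited fields in a file that has
# no line breaks at all. Use "\r\n" instead of "\n" for literal DOS CR/LF endings.
awk -v RS='|' -v nf=14 '{ printf "%s%s", $0, (NR % nf ? "|" : "\n") }' bigfile.dat
```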
5. Shell Programming and Scripting
Hi All
I wanted to know how to effectively delete some columns in a large tab delimited file.
I have a file that contains 5 columns and almost 100,000 rows
3456 f g t t
3456 g h
456 f h
4567 f g h z
345 f g
567 h j k l
This is a very large data file and tab delimited.
I need... (2 Replies)
Discussion started by: Lucky Ali
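Two standard ways to do this, sketched with a placeholder file name and an illustrative column set (here: keep columns 1, 3 and 4, i.e. delete the others). Both stream the file, so its size is not a problem.

```shell
# cut is the simplest when the kept columns are known up front:
cut -f1,3-4 input.tsv

# awk equivalent, handy if the kept set must be computed:
awk -F'\t' -v OFS='\t' '{ print $1, $3, $4 }' input.tsv
```

Rows shorter than the highest kept column (like some in the sample) come out with empty trailing fields under awk.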
6. Shell Programming and Scripting
Since there are approximately 75K gsfiles and hundreds of stfiles per gsfile, this script can take hours. How can I rewrite this script, so that it's much faster? I'm not as familiar with perl but I'm open to all suggestions.
ls file.list>$split
for gsfile in `cat $split`;
do
csplit... (17 Replies)
Discussion started by: verge
7. Shell Programming and Scripting
Hi,
I have the following command in place
nawk -F, '!a[$1,$2,$3]++' file > file.uniq
It has been working perfectly as per requirements, removing duplicates by taking into consideration only the first 3 fields. Recently it has started giving the below error:
bash-3.2$ nawk -F, '!a[$1,$2,$3]++'... (17 Replies)
Discussion started by: makn
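A self-contained sketch of the idiom in question (awk shown here; nawk on Solaris behaves the same, and the forum software tends to eat the square brackets):

```shell
# Keep only the first line seen for each (field1, field2, field3) key.
# The comma in the subscript joins the fields with SUBSEP, and a line
# passes the filter only when its key has not been counted before.
awk -F, '!a[$1,$2,$3]++' file > file.uniq
```

The array holds one entry per distinct key, so memory grows with key cardinality; on very large inputs that can itself become the error, and a sort-based approach may be needed instead.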
8. Shell Programming and Scripting
I am working on a homonym dictionary of names i.e. names which are clustered together according to their “sound-alike” pronunciation:
An example will make this clear:
Since the dictionary is manually constructed it often happens that inadvertently two sets of “homonyms” which should be grouped... (2 Replies)
Discussion started by: gimley
9. UNIX for Advanced & Expert Users
I have a file whose size is around 24 GB, with 14 columns, delimited with "|".
My requirement: can anyone provide the fastest and best way to get the below results?
Number of records of the file
First column and second Column- Unique counts
Thanks for your time
Karti
------ Post updated at... (3 Replies)
Discussion started by: kartikirans
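A single-pass awk sketch (bigfile.dat is a placeholder): the arrays collect the distinct values of the first two pipe-delimited columns while NR counts the records.

```shell
# One pass: record count plus unique counts of columns 1 and 2.
# gawk has length(array), but the END loops below are portable awk.
awk -F'|' '
  { a1[$1]; a2[$2] }
  END {
    for (k in a1) u1++
    for (k in a2) u2++
    print "records:", NR, "unique col1:", u1, "unique col2:", u2
  }' bigfile.dat
```

Memory is proportional to the number of distinct values, not the 24 GB file size, so this is usually feasible even for very large files.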
10. Shell Programming and Scripting
I have a large file (1.5 GB) and want to sort the file.
I used the following AWK script to do the job
!x[$0]++
The script works but it is very slow and takes over an hour to do the job. I suspect this is because the file is not sorted.
Any solution to speed up the AWk script or a Perl script would... (4 Replies)
Discussion started by: gimley
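For comparison, the two usual de-duplication routes on a file of this size (file is a placeholder name):

```shell
# sort -u spills to temporary files, so memory stays bounded on a 1.5 GB
# input, but the output order is sorted, not the original order.
sort -u file > file.uniq

# awk keeps the first copy of every line in original order, at the cost of
# holding every distinct line in memory.
awk '!x[$0]++' file > file.uniq
```

If the output must be both de-duplicated and sorted anyway, sort -u does both jobs in one step and is typically much faster than awk followed by sort.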
LEARN ABOUT SUSE
DPBINDIC(1) General Commands Manual DPBINDIC(1)
NAME
dpbindic - Convert a binary-form dictionary into a text-form dictionary
SYNOPSIS
dpbindic [ -xiu [ frequency ] ] binary-file [ text-file ]
DESCRIPTION
dpbindic outputs the file information of the binary-form dictionary file specified in binary-file. The word information of the dictionary
can also be output in text form to the standard output. To do so, use text-file to specify the text-form dictionary used as the source of
the binary-form dictionary file. If this specification is omitted, the text dictionary file information in the binary dictionary file will
be output. The standard grammar file name is /usr/local/canna/lib/dic/hyoujun.gram; it will be used if the grammar file name specification
is omitted. The output format of word information data is specified using an option.
OPTIONS
-x Outputs the data without using omission symbol @, which is used when the initial word represents the reading.
-i Replaces the reading and word for output.
-u Outputs the candidates used in conversion. Outputs all candidates having frequency or more. If frequency is omitted, all candidates having frequency 1 will be output.
EXAMPLES
(1) If the text-form dictionary file name is omitted:
%dpbindic iroha.cbd
(Text dictionary file name = Directory size + Word size, packed)
iroha.swd = 2985 + 5306 pak a4
iroha.mwd = 36276 + 113139 pak a4
(2) If the text-form dictionary file name iroha.mwd is specified:
%dpbindic iroha.cbd iroha.mwd
(Text dictionary file name = Directory size + Word size, packed)
iroha.mwd = 36276 + 113139 pak a4
SEE ALSO
mkbindic(1), dicar(1)
DPBINDIC(1)