12-07-2012
Hello,
I found the problem: the BOM (Byte Order Mark).
Under Windows, a UTF-8 file normally starts with a BOM (byte order mark, U+FEFF). I concede that it is legal for a file to do so, but it is utterly pointless, since UTF-8 has no byte-order ambiguity: the byte order is fixed by the encoding itself. And it just happens that, unlike the rest of UTF-8, an initial BOM will screw up a Unix system. And Perl is supposed to be
Quote:
"an oasis of Unix culture in the desert of can't-get-there-from-here" (Larry Wall, probably slightly misquoted).
Using a hex editor I removed the BOM bytes (EF BB BF, the UTF-8 encoding of U+FEFF) and it worked like a charm.
On Linux you should have no problem, since this aberration does not exist in a Unix system.
Many thanks for trying to solve the mystery.
As an aid to all of us who suffer the tyranny of the WinOS system, here is a useful link:
HTML Code:
http://www.perlmonks.org/?node_id=599720
This offers two solutions for the problem. Googling
Quote:
"perl bom" or "perl File::BOM"
comes up with more if needed.
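Before blaming anything else, it is worth checking whether a suspect file has been afflicted at all. A quick sketch, with made-up filenames (the two `printf` lines just fabricate a sample with and without the BOM):

```shell
# Two sample files: one starting with the UTF-8 BOM (EF BB BF, in octal),
# one without it.
printf '\357\273\277x' > withbom.txt
printf 'x' > without.txt

# Inspect the first three bytes of each file as hex:
for f in withbom.txt without.txt; do
    if head -c 3 "$f" | od -An -tx1 | grep -q 'ef bb bf'; then
        echo "$f: BOM present"
    else
        echo "$f: no BOM"
    fi
done
```

The same `head -c 3 file | od -An -tx1` pipeline is handy on its own whenever a script mysteriously fails on its very first line.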