Shell Programming and Scripting: Problem identifying charset of a file
Post 302301835 by sridhar_423, Saturday 28th of March 2009, 04:00:49 PM
I think I found what I was looking for after a series of tests.
file -- This may not always give the correct answer. In the post above, chars.txt was reported as UTF-8 because it was saved to disk as UTF-8 with a byte-order mark (BOM): the first three bytes of the file are EF BB BF, which mark it as Unicode text encoded in UTF-8.
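A quick way to check for that BOM from the shell (a minimal sketch using standard od and file; chars.txt is the file name from the earlier post):

    od -An -tx1 -N 3 chars.txt     # prints " ef bb bf" if the file starts with a UTF-8 BOM
    file chars.txt                 # should then report it as UTF-8 (exact wording varies by file version)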

In my case, the file was generated using cp1256. If the leading bytes are all ASCII characters, file reports the whole file as ASCII, because cp1256 is identical to ASCII for code points 0-127. (I believe file only inspects an initial chunk of the file, perhaps the first 512 bytes; I'm not 100% sure, so I simply added 1000 English characters to the beginning of the file to test it.)
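Here is roughly that test as a sketch (the file names are made up for the example, and the exact descriptions printed depend on your version of file):

    printf 'A%.0s' {1..1000} > ascii_prefix.txt        # 1000 English (ASCII) characters (bash brace expansion)
    cat ascii_prefix.txt arabic_cp1256.txt > combined.txt
    file arabic_cp1256.txt combined.txt
    # in the test described above, the original file was flagged as non-ASCII/extended text,
    # while the file with the long ASCII prefix came back as plain ASCII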

As for the numbers shown when the file is opened in the vi editor, they are the octal (base 8) values of the code points. I ran the test below to confirm it:
1. Opened the file in vi and copied some of those numbers.
2. Wrote a PHP program to convert the octal values to decimal and print the corresponding characters.
Since my machine uses cp1256 to represent characters that fall outside the ASCII range, the program displayed Arabic text. So these numbers are nothing but the code points.
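The same conversion can be done straight from the shell instead of PHP (a sketch; the octal value 307 is just an illustrative byte, 0xC7, not one copied from the real file):

    printf '\307' | od -An -to1 -tx1           # show the raw byte in octal and hex
    printf '\307' | iconv -f CP1256 -t UTF-8   # prints the corresponding Arabic letter on a UTF-8
                                               # terminal, if your iconv knows the CP1256 alias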

Thanks,
Sridhar
 

PRECONV(1)						      General Commands Manual							PRECONV(1)

NAME
       preconv - convert encoding of input files to something GNU troff understands

SYNOPSIS
       preconv [-dr] [-e encoding] [files ...]
       preconv -h | --help
       preconv -v | --version

       It is possible to have whitespace between the -e command line option and its parameter.

DESCRIPTION
       preconv reads files and converts its encoding(s) to a form GNU troff(1) can process, sending the data to
       standard output.  Currently, this means ASCII characters and '\[uXXXX]' entities, where 'XXXX' is a
       hexadecimal number with four to six digits, representing a Unicode input code.  Normally, preconv should
       be invoked with the -k and -K options of groff.

OPTIONS
       -d             Emit debugging messages to standard error (mainly the used encoding).

       -Dencoding     Specify default encoding if everything fails (see below).

       -eencoding     Specify input encoding explicitly, overriding all other methods.  This corresponds to
                      groff's -Kencoding option.  Without this switch, preconv uses the algorithm described
                      below to select the input encoding.

       --help
       -h             Print help message.

       -r             Do not add .lf requests.

       --version
       -v             Print version number.
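       As an illustration of the options above (a sketch; doc.ms and doc.tr are hypothetical file names):

              preconv -d -e utf-8 doc.ms > doc.tr    # convert explicitly from UTF-8; debug notes go to stderr
              preconv -d doc.ms > doc.tr             # let preconv choose the encoding itself (see USAGE below)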
USAGE
       preconv tries to find the input encoding with the following algorithm.

       1.  If the input encoding has been explicitly specified with option -e, use it.

       2.  Otherwise, check whether the input starts with a Byte Order Mark (BOM, see below).  If found, use it.

       3.  Finally, check whether there is a known coding tag (see below) in either the first or second input
           line.  If found, use it.

       4.  If everything fails, use a default encoding as given with option -D, by the current locale, or
           'latin1' if the locale is set to 'C', 'POSIX', or empty (in that order).

       Note that the groff program supports a GROFF_ENCODING environment variable which is eventually expanded
       to option -k.

   Byte Order Mark
       The Unicode Standard defines character U+FEFF as the Byte Order Mark (BOM).  On the other hand, value
       U+FFFE is guaranteed not to be a Unicode character at all.  This makes it possible to detect the byte
       order within the data stream (either big-endian or little-endian), and the MIME encodings 'UTF-16' and
       'UTF-32' mandate that the data stream starts with U+FEFF.  Similarly, a data stream encoded as 'UTF-8'
       might start with a BOM (to ease the conversion from and to UTF-16 and UTF-32).  In all cases, the byte
       order mark is not part of the data but part of the encoding protocol; in other words, preconv's output
       doesn't contain it.  Note that U+FEFF not at the start of the input data actually is emitted; it then has
       the meaning of a 'zero width no-break space' character - something not normally needed in groff.

   Coding Tags
       Editors which support more than a single character encoding need tags within the input files to mark the
       file's encoding.  While it is possible to guess the right input encoding with the help of heuristic
       algorithms for data which represents a greater amount of a natural language, it is still just a guess.
       Additionally, all algorithms fail easily for input which is either too short or doesn't represent a
       natural language.  For these reasons, preconv supports the coding tag convention (with some restrictions)
       as used by GNU Emacs and XEmacs (and probably other programs too).

       Coding tags in GNU Emacs and XEmacs are stored in so-called File Variables.  preconv recognizes the
       following syntax form which must be put into a troff comment in the first or second line.

              -*- tag1: value1; tag2: value2; ... -*-

       The only relevant tag for preconv is 'coding', which can take the values listed below.  Here is an
       example line which tells Emacs to edit a file in troff mode, and to use latin2 as its encoding.

              .\" -*- mode: troff; coding: latin-2 -*-

       The following list gives all MIME coding tags (either lowercase or uppercase) supported by preconv; this
       list is hard-coded in the source.

              big5, cp1047, euc-jp, euc-kr, gb2312, iso-8859-1, iso-8859-2, iso-8859-5, iso-8859-7, iso-8859-9,
              iso-8859-13, iso-8859-15, koi8-r, us-ascii, utf-8, utf-16, utf-16be, utf-16le

       In addition, the following hard-coded list of other tags is recognized which eventually map to values
       from the list above.

              ascii, chinese-big5, chinese-euc, chinese-iso-8bit, cn-big5, cn-gb, cn-gb-2312, cp878, csascii,
              csisolatin1, cyrillic-iso-8bit, cyrillic-koi8, euc-china, euc-cn, euc-japan, euc-japan-1990,
              euc-korea, greek-iso-8bit, iso-10646/utf8, iso-10646/utf-8, iso-latin-1, iso-latin-2, iso-latin-5,
              iso-latin-7, iso-latin-9, japanese-euc, japanese-iso-8bit, jis8, koi8, korean-euc,
              korean-iso-8bit, latin-0, latin1, latin-1, latin-2, latin-5, latin-7, latin-9, mule-utf-8,
              mule-utf-16, mule-utf-16be, mule-utf-16-be, mule-utf-16be-with-signature, mule-utf-16le,
              mule-utf-16-le, mule-utf-16le-with-signature, utf8, utf-16-be, utf-16-be-with-signature,
              utf-16be-with-signature, utf-16-le, utf-16-le-with-signature, utf-16le-with-signature

       Those tags are taken from GNU Emacs and XEmacs, together with some aliases.  Trailing '-dos', '-unix',
       and '-mac' suffixes of coding tags (which give the end-of-line convention used in the file) are stripped
       off before the comparison with the above tags happens.

   Iconv Issues
       preconv by itself only supports three encodings: latin-1, cp1047, and UTF-8; all other encodings are
       passed to the iconv library functions.  At compile time, a check is made for a valid iconv
       implementation; a call to 'preconv --version' shows whether iconv is used.
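       A small demonstration of the behaviour described in DESCRIPTION and 'Byte Order Mark' above (a sketch;
       bom.txt is a throwaway test file, and the exact output may differ between groff versions):

              printf '\357\273\277caf\303\251\n' > bom.txt   # a UTF-8 BOM followed by the word "café"
              preconv bom.txt
              # the BOM is dropped and the accented letter appears as a \[u00E9] entity;
              # a leading .lf request is also added unless -r is given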
BUGS
       preconv doesn't support local variable lists yet.  This is a different syntax form to specify local
       variables at the end of a file.

SEE ALSO
       groff(1)

       the GNU Emacs and XEmacs info pages

COPYING
       Copyright (C) 2006-2014 Free Software Foundation, Inc.

       Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice
       and this permission notice are preserved on all copies.

       Permission is granted to copy and distribute modified versions of this manual under the conditions for
       verbatim copying, provided that the entire resulting derived work is distributed under the terms of a
       permission notice identical to this one.

       Permission is granted to copy and distribute translations of this manual into another language, under the
       above conditions for modified versions, except that this permission notice may be included in translations
       approved by the Free Software Foundation instead of in the original English.

Groff Version 1.22.3                              10 February 2018                                    PRECONV(1)