12-01-2015
The description of your problem is extremely confusing. UTF-8, UTF-16, and UTF-32 are three different encodings of the same character set (Unicode), and a single file is normally encoded in just one of them. If you really have a single file that mixes all three, determining which bytes in that file represent a &lt;newline&gt; character may be impossible unless you can clearly describe the byte offsets where the file shifts from one encoding to another, and clearly describe how any program reading the file can determine which encoding is in use for any particular byte.
If you are reading a file that is entirely encoded in UTF-8 (in which a character occupies one to four bytes), you could tell your script that the UTF-8 input file was instead a file encoded in ISO 8859-1 (in which every character is one byte) and count characters in lines in awk using the length() function, since the &lt;newline&gt; character is encoded the same way in both codesets. Be aware that this counts bytes, not characters, on any line that contains multi-byte UTF-8 sequences.
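As a minimal sketch of that trick: running awk in a single-byte locale (LC_ALL=C is the portable way to get the one-byte-per-character behavior described above, since an ISO 8859-1 locale may not be installed) makes length() return byte counts. The file name and the per-line output format are placeholders here:

```shell
# Force awk into a single-byte locale so length() counts bytes.
# <newline> is the same byte (0x0A) in UTF-8 and ISO 8859-1, so line
# boundaries are unaffected.  "file.txt" is a placeholder name.
LC_ALL=C awk '{ printf "line %d: %d bytes\n", NR, length($0) }' file.txt
```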
But since you haven't described what the rest of your awk program does, we have no way to guess whether this option would work for you, or whether there might be other options.
shiftjis(5) File Formats Manual shiftjis(5)
NAME
shiftjis, SJIS - A character encoding system (codeset) for Japanese
DESCRIPTION
The Shift JIS (SJIS) codeset consists of the following character sets: JIS X0201, JIS X0208, and User-Defined Characters (UDC).
Shift JIS Encoding
Shift JIS character codes use a combination of single-byte data and 2-byte data to represent characters defined in the JIS X0201 and JIS X0208 standards and in the UDC area.
All JIS X0201 characters are represented in the form of single-byte data. The Roman letters in JIS X0201 are encoded by setting the most significant bit (MSB) of each byte to off, while the Katakana characters are encoded by setting the MSB of each byte to on. For more information on JIS X0201 characters, refer to deckanji(5). In the Super DEC Kanji codeset, the code ranges for JIS X0201 characters are as follows: 00 to 7F for Roman letters, and A1 to DF for Katakana characters.
JIS X0208 characters are encoded in 2-byte values. The values for the first bytes are encoded so that they fall outside the range of byte values for JIS X0201 characters (in other words, the JIS X0208 first-byte ranges are 81 to 9F and E0 to FC). In this manner, characters from the two different standards can be supported by the same codeset. The range for the second byte of a JIS X0208 character is 40 to FC (except for 7F). For more information on JIS X0208 characters, refer to deckanji(5).
The Shift JIS codeset provides for 2444 UDC characters. These are encoded as 2-byte values whose code range is F040 to FCFC.
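The byte ranges above can be applied mechanically; the following is only an illustrative sketch (the sample file name and the label strings are invented here, and the numeric comparisons are the ranges from this page converted to decimal), using od(1) to dump unsigned byte values and awk to classify them:

```shell
# Dump each byte of a Shift JIS file as an unsigned decimal value and
# classify it using the ranges described above.
# "sample.sjis" is a placeholder name.
od -An -v -tu1 sample.sjis | awk '
{
    for (i = 1; i <= NF; i++) {
        b = $i + 0
        if (trail) {
            trail = 0
            # second byte of a 2-byte character: 40 to FC, except 7F
            if (b >= 64 && b <= 252 && b != 127)
                kind = "second byte of 2-byte character (40-FC, not 7F)"
            else
                kind = "invalid second byte"
        }
        else if (b <= 127)             kind = "JIS X0201 Roman (00-7F)"
        else if (b >= 161 && b <= 223) kind = "JIS X0201 Katakana (A1-DF)"
        else if ((b >= 129 && b <= 159) || (b >= 224 && b <= 252)) {
            # 81-9F and E0-FC: first byte of a JIS X0208 or UDC character
            kind = "JIS X0208/UDC first byte"
            trail = 1
        }
        else                           kind = "not a valid lead byte"
        printf "%02X  %s\n", b, kind
    }
}'
```

For example, the three bytes 41 82 A0 (the letter A followed by one 2-byte character) classify as a single-byte Roman letter, a first byte, and a second byte.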
Font Support for Super DEC Kanji
For display devices, the operating system supports Super DEC Kanji encoding by conversion to DEC Kanji encoding and then using fonts available for DEC Kanji. Refer to the iconv_intro(5) reference page for information on codeset conversion.
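For converting Shift JIS data itself (as opposed to font handling), iconv(1) is the usual tool. A sketch, assuming an iconv implementation that recognizes the SHIFT_JIS codeset name (names vary between systems; some use SJIS, and the file names are placeholders):

```shell
# Convert Shift JIS text to UTF-8.  "SHIFT_JIS" is the codeset name used
# by GNU iconv; other systems may call it SJIS.
iconv -f SHIFT_JIS -t UTF-8 input.sjis > output.utf8
```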
For printers, the operating system supports only printer-resident fonts; therefore, Super DEC Kanji fonts cannot be dynamically loaded to a printer. For general information on printing non-English text, refer to i18n_printing(5).
SEE ALSO
Commands: locale(1)
Others: ascii(5), deckanji(5), eucJP(5), i18n_intro(5), i18n_printing(5), iconv_intro(5), iso2022jp(5), Japanese(5), jiskanji(5), l10n_intro(5), sdeckanji(5)