Hi,
I have a non-ASCII character (Ŵ) whose UTF-8 encoding is the two-byte hex sequence 0xC5 0xB4. Is there a function in UNIX to convert this hex value back into the displayed character?
If your locale is set up correctly, any number of utilities can display this character. For example, if your shell is ksh93 version s or better, printf '\xC5\xB4' will output the expected character.
Hi fpmurphy, thanks for the response.
Can you please elaborate on how the locale is set, and what it should be set to? I am running Solaris 5.8.
Also, this is part of a bigger problem. I am transmitting the above character in an email message. I extract the message from the mail server on UNIX and decode it, but the character comes out as (Å´), which is 0xC5B4 interpreted as two separate single-byte characters (0xC5 = Å, 0xB4 = ´).
So I want to take these two characters (Å´) and convert them to (Ŵ), either directly or via their hex value (0xC5B4).
That means the decoding process treats the message as ISO-8859-1 (Latin-1) rather than UTF-8. There is no "conversion" going on here; the decoding simply fails because of a wrong assumption about the encoding.
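If you only need to repair strings that have already been mangled this way, here is a minimal Perl sketch. The round-trip through the Encode module is my own suggestion, not something from this thread: re-encoding the mojibake as ISO-8859-1 recovers the original raw bytes, which can then be decoded as UTF-8.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(encode decode);

# "Å´" is what you get when the UTF-8 bytes 0xC5 0xB4 are
# mis-decoded as two ISO-8859-1 characters.
my $mojibake = "\x{C5}\x{B4}";

# Re-encode as ISO-8859-1 to recover the original raw bytes...
my $bytes = encode('ISO-8859-1', $mojibake);   # "\xC5\xB4"

# ...then decode those bytes as UTF-8 to get the real character.
my $fixed = decode('UTF-8', $bytes);           # "Ŵ" (U+0174)

binmode STDOUT, ':encoding(UTF-8)';
print "$fixed\n";
```

The real fix, of course, is to decode the message correctly in the first place rather than repairing it after the fact.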
Have you tried to investigate what is causing the message not to be interpreted as UTF-8? For instance, did you check the encoding declared in the mail header? Was it erroneously specified as something other than UTF-8? You might also try other UTF-8 mail to see whether the problem is specific to one message (sometimes a misconfigured mail user agent is the culprit) or something bigger. Try switching mail clients and see if you can always reproduce it.
Hi there,
I am actually using Perl to retrieve the message from the mailbox. The Perl module used for decoding is MIME::Base64 (from MIME-Base64-3.07), but the decoded output comes out as ASCII/ISO-8859-1, even though the mail header correctly declares the encoding as UTF-8.
In this case, if I want to convert this data back to UTF-8 (as described above), is there a command or some other way to do it in UNIX?
I am not too sure about MIME::Base64, as I have not used it before. However, Base64 itself is encoding-agnostic: it encodes and decodes without regard to the character encoding of the original message, because it is used not only for textual data but also for images, zip files, and just about any binary data you can imagine, none of which have a notion of "encoding" at all. All Base64 sees and acts on is a byte stream; it does not care what is inside.
So, for a text message, Base64 decoding only gives you back the original bytes. In other words, you still need to handle the character decoding manually to have Perl interpret the result as UTF-8. By default, Perl treats everything as single-byte ASCII, which may explain why you are getting the wrong output.
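To illustrate the two separate steps, here is a hedged sketch; the Base64 string 'xbQ=' is just an example I constructed encoding the two bytes 0xC5 0xB4, i.e. UTF-8 for Ŵ:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use MIME::Base64 qw(decode_base64);
use Encode qw(decode);

# Step 1: decode_base64 only undoes the Base64 transport encoding.
# It hands back raw bytes, with no idea what charset they are in.
my $bytes = decode_base64('xbQ=');        # the two bytes 0xC5 0xB4

# Step 2: character decoding is a separate, manual step.
my $str = decode('UTF-8', $bytes);        # one character: Ŵ

binmode STDOUT, ':encoding(UTF-8)';
print "$str\n";
print "Length: ", length($str), "\n";     # 1
```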
Perl has specific quirks with respect to Unicode, and much depends on the version of Perl you are using. I have investigated Perl's Unicode support in the 5.8 branch fairly thoroughly, but I am not sure what changes have been made in 5.10. If you have Perl 5.6 or earlier, chances are its Unicode support is not adequate to ensure Unicode safety.
There is too much to explain in the limited space here, so I recommend you start with the perluniintro manpage for further information:
OK, in case you feel bewildered by that manpage (you probably will!), let me give you a series of examples covering some of the most important things you need to know.
Since I am Chinese, I will use Chinese in the examples. All the code is in UTF-8.
Expected environment: a UTF-8 terminal with proper fonts to render Unicode text.
Test 1 - Let's start with this
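The code listing for this test did not survive here, so what follows is my reconstruction of what it likely looked like. I assume a UTF-8-encoded source file and the full-width question mark ？ (so all four characters are 3 bytes each):

```perl
#!/usr/bin/perl
use strict;
use warnings;
# No "use utf8;" here, so Perl treats the string literal as raw bytes.
my $str = "你好吗？";
print "$str\n";
print "Length: ", length($str), "\n";    # byte count, not character count
```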
你好吗？
Length: 12
This string is made up of 4 Chinese characters, 3 bytes each in UTF-8. Because Perl is not treating the source as UTF-8 but as raw bytes, length() returns 12. The terminal still renders the string properly because the bytes are passed to it verbatim and the terminal itself decodes the byte stream as UTF-8; remember, I assumed the terminal is properly configured for UTF-8 (Perl, in this case, is not).
Test 2 - Recognize UTF-8 characters embedded in source code
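Again the listing is missing; a likely reconstruction, this time with the utf8 pragma enabled (same assumptions as Test 1):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use utf8;    # tell Perl the source file itself is UTF-8
my $str = "你好吗？";
print "$str\n";                          # warns: Wide character in print
print "Length: ", length($str), "\n";    # character count: 4
```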
Wide character in print at test.pl line 6.
你好吗？
Length: 4
Perl now recognizes the string as a 4-character UTF-8 string, but it issues a warning because the output stream (STDOUT) is not configured to accept UTF-8-decoded strings.
Test 3 - Turn on UTF-8 mode on standard streams
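A sketch of this step, adding a binmode call on STDOUT to the previous test (my reconstruction, not the original listing):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use utf8;
# Mark STDOUT as UTF-8 so it accepts decoded (character) strings.
binmode STDOUT, ':encoding(UTF-8)';
my $str = "你好吗？";
print "$str\n";                          # no warning this time
print "Length: ", length($str), "\n";    # 4
```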
Now the warning disappears. From the perspective of Perl, UTF-8 is now correctly handled.
But what about strings originated elsewhere (as in your case), rather than embedded in source code? We will need another way.
Test 4 - Use manual decoding
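Again a reconstruction: here the utf8 pragma is dropped, so the literal is a byte string, and the Encode module's decode() does the conversion explicitly. This is the pattern that applies to data arriving from outside the program:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(decode);
binmode STDOUT, ':encoding(UTF-8)';
# No "use utf8;": the literal below is a string of raw UTF-8 bytes,
# just like data read from a file, socket, or decoded mail body.
my $bytes = "你好吗？";
my $str = decode('UTF-8', $bytes);       # manual decode: bytes -> characters
print "$str\n";
print "Length: ", length($str), "\n";    # 4
```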
Same result as Test 3, but the decoding is manual. The source code, and hence the embedded string literal, is treated as bytes; the explicit decode converts the literal into a Perl character string, so length() reports the correct length afterwards.
These examples cover maybe 80% of what you will need to know to have Perl process Unicode properly in the majority of cases. For the rest, consult the manpage.