01-11-2017
As the extension you used was .nfo, I'm guessing you wanted a file that uses DOS (code page 437) characters, which are quite commonly used in ANSI art.
Notepad++ is quite capable of viewing UTF-8, simply select Encoding->UTF-8.
The version of Notepad++ I have (v5.9.6.2) doesn't have a CP437 encoding; however, most of the DOS graphics characters will display OK with Encoding->Character sets->Greek->OEM 737 or Hebrew->OEM 862.
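If your Notepad++ lacks CP437, another option is to convert the file to UTF-8 once with iconv and then view it in any UTF-8-capable editor. A minimal sketch, assuming iconv with CP437 support; the file name artwork.nfo is hypothetical, and the sample file is created here just so the command has something to work on:

```shell
# Make a tiny CP437 sample: byte 0xB0 is the light-shade block (U+2591)
printf 'NFO \260\260\260\n' > artwork.nfo      # artwork.nfo is a hypothetical name
# Convert once to UTF-8 so any editor can display it
iconv -f CP437 -t UTF-8 artwork.nfo > artwork-utf8.nfo
cat artwork-utf8.nfo
```

The conversion is one-way here; to go back to CP437 for authentic .nfo distribution, swap the -f and -t arguments.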
10 More Discussions You Might Find Interesting
1. Tips and Tutorials
The Solaris 9 and later CD ISO images are laid out differently than previous versions of the ISO images for Solaris.
If you just want to build a jumpstart and can afford the bandwidth to do so, download the Solaris DVD and use that instead. You don't need to do any of this with the DVD iso.
... (0 Replies)
Discussion started by: reborg
2. Shell Programming and Scripting
Hi folks,
I have the following configuration file,which contains list of directories:
/tmp> cat utils.conf
Backup
CPSync
Change_Listener_Port
Create_Database
Deinstall
Install_CPPlugin
Project_migrator
I have the following command in my ksh program:
mkisofs -l -L -R -V ${PACK_NAME}... (1 Reply)
Discussion started by: nir_s
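One way to feed such a config file to mkisofs is to collapse the directory names into a single argument list first. A hedged sketch: the sample config contents are abbreviated, the PACK_NAME value is an assumption, and the mkisofs line is shown commented out because the listed directories would have to exist:

```shell
# Sample config, one directory name per line (abbreviated from the post)
printf 'Backup\nCPSync\nCreate_Database\n' > utils.conf
# Collapse newlines into spaces to build one argument list
DIRS=$(tr '\n' ' ' < utils.conf)
echo "$DIRS"
# Then pass the list unquoted so each name becomes its own argument (sketch):
# mkisofs -l -L -R -V "$PACK_NAME" -o "$PACK_NAME.iso" $DIRS
```

Leaving $DIRS unquoted on the mkisofs line is deliberate: word splitting turns the list back into separate directory arguments (this breaks if directory names contain spaces).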
3. Shell Programming and Scripting
Dear Friends,
I have an XML file that's encoded in ISO-8859-1. I have some European characters coming in from 2 fields (Name, Comments) in the XML file. Can anyone suggest if there are any functions in Unix to read those characters? Using shell programming, can I parse this xml file?
Please... (0 Replies)
Discussion started by: madhavim
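A common approach for this kind of task is to convert the file to UTF-8 with iconv first, after which ordinary shell tools handle the European characters fine. A sketch under assumptions: the file name and field contents are made up here, and the sed extraction is a simplistic parse (a real XML parser is safer for production use):

```shell
# Create a small ISO-8859-1 sample: byte 0xE9 is 'e' with acute accent in Latin-1
printf '<Name>Ren\351</Name>\n' > sample.xml    # hypothetical file
# Convert to UTF-8 before parsing with shell tools
iconv -f ISO-8859-1 -t UTF-8 sample.xml > sample-utf8.xml
# Pull out the field with sed (simplistic: assumes one <Name> element per line)
sed -n 's|.*<Name>\(.*\)</Name>.*|\1|p' sample-utf8.xml
```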
4. Solaris
Hi All,
Please help me with this.
My plan is to create an ISO image of my current Solaris 8 OS, because we use a stripped-down version of Solaris 8 which is different from the standard one on the CD. Will the dd command do?
My idea is to create a VMware image from iso file and play it in... (6 Replies)
Discussion started by: Jartan
5. Shell Programming and Scripting
Hi all,
I got the following problem.
I run this command.
mkisofs -o mynew.iso -l -b isolinux.bin -c boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table base images isolinux pc_doc
and I get all files from base, images, isolinux and pc_doc in the root dir of the iso.
I want a... (2 Replies)
Discussion started by: stinkefisch
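When mkisofs is given bare directory arguments it merges their contents into the image root. One possible fix is graft points: each `iso_path/=local_dir` pair keeps the directory as its own folder in the ISO. An untested sketch; the command is written to a script file rather than run, since the input directories are not present here:

```shell
# Compose the mkisofs invocation with graft points so base, images,
# isolinux and pc_doc each appear as folders in the ISO root (sketch only)
cat > build-iso.sh <<'EOF'
mkisofs -o mynew.iso -l -b isolinux.bin -c boot.cat -no-emul-boot \
  -boot-load-size 4 -boot-info-table -graft-points \
  base/=base images/=images isolinux/=isolinux pc_doc/=pc_doc
EOF
cat build-iso.sh
```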
6. Red Hat
Hi All,
I want to create a kickstart bootable ISO file. I have a CentOS 5.4 ISO and a customized ks.cfg file. Now I need to recreate the ISO with ks.cfg and the contents of the existing ISO.
During installation, it should automatically pick up the kickstart file and proceed with the installation.
... (0 Replies)
Discussion started by: kalpeer
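The usual recipe is to unpack the ISO, drop ks.cfg into the tree, point the boot loader entry at it, and rebuild with mkisofs. A hedged sketch: the isolinux.cfg contents below are an assumption (the real append line depends on the CentOS media), and the mkisofs rebuild is shown commented because the full tree is not present here:

```shell
# Simulate an unpacked ISO's boot config (contents are an assumption)
mkdir -p isolinux
printf 'label linux\n  append initrd=initrd.img\n' > isolinux/isolinux.cfg
# Point the boot entry at the kickstart file on the CD (ks=cdrom:/ks.cfg)
sed 's|append initrd=initrd.img|& ks=cdrom:/ks.cfg|' isolinux/isolinux.cfg > tmp.cfg
mv tmp.cfg isolinux/isolinux.cfg
cat isolinux/isolinux.cfg
# Rebuild from the tree root (sketch):
# mkisofs -o ks-centos.iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
#   -no-emul-boot -boot-load-size 4 -boot-info-table -R -J .
```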
7. Red Hat
Hi All,
I have one query on creating bootable ISO.
I have installed CentOS 5.6 and made a few configuration changes which are needed for deploying my app. Later I deployed my app. Now CentOS is up and running in a dedicated box along with my app.
Now I want to create the... (3 Replies)
Discussion started by: kalpeer
8. Linux
Hi,
I have tried to convert a UTF-8 file to a Windows UTF-16 format file, as below, from a Unix machine:
unix2dos < testing.txt | iconv -f UTF-8 -t UTF-16 > out.txt
and I am getting some Chinese characters, as below, when I opened the converted file on a Windows machine.
LANG=en_US.UTF-8... (3 Replies)
Discussion started by: phanidhar6039
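One thing worth checking in cases like this: many Windows applications expect little-endian UTF-16 with an explicit byte-order mark, and without a BOM the bytes can be mis-detected and render as CJK characters. A hedged sketch, with awk used in place of unix2dos for the CRLF step and a made-up sample input; whether this matches the Windows application's expectation is an assumption:

```shell
printf 'hello\n' > testing.txt                  # sample input
printf '\377\376' > out.txt                     # UTF-16LE byte-order mark (FF FE)
# Add CRLF line endings while the text is still UTF-8, then convert;
# UTF-16LE (unlike plain UTF-16) never emits a BOM itself, hence the printf above
awk '{printf "%s\r\n", $0}' testing.txt | iconv -f UTF-8 -t UTF-16LE >> out.txt
od -An -tx1 -N4 out.txt                         # expect: ff fe 68 00
```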
9. Solaris
Hi Solaris 10 Experts,
How can I create an ISO Image of a CD/DVD from the cdrom to a temporary directory, and then use that image to burn it on a blank DVD in the cdrom in Solaris 10 1/13 OS environment?
Please provide me with an example.
With best regards,
SS (1 Reply)
Discussion started by: ssabet
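A minimal sketch of the usual two-step answer: dd copies the raw disc to a file, and cdrw burns that file back. The Solaris device path below is an assumption (check yours with `iostat -En` or `ls /dev/rdsk`); the demonstration runs on an ordinary file so the copy can be verified:

```shell
# Copy a source (device or file) to an ISO image, 1 MiB at a time
make_iso_copy() {
    dd if="$1" of="$2" bs=1048576 2>/dev/null
}
# On Solaris 10 this might look like (device name is an assumption):
#   make_iso_copy /dev/rdsk/c0t2d0s2 /tmp/mydisc.iso
#   cdrw -i /tmp/mydisc.iso
# Demonstrate on an ordinary file so the copy can be checked
printf 'fake disc data' > disc.src
make_iso_copy disc.src disc.iso
cmp disc.src disc.iso && echo copied-ok
```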
10. Shell Programming and Scripting
Hello,
In my shell script, I extract table data from an HP Vertica DB into a CSV file using the vsql -c command. But the problem is that the file being created is in binary format, and hence some of the data, which has Chinese characters in it, becomes unreadable.
file -i filename.csv - gives... (2 Replies)
Discussion started by: Dharmatheja
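If `file -i` reports the export as binary because the client wrote UTF-16 (a hedged guess; the usual root cause is vsql running without a UTF-8 locale such as LANG=en_US.UTF-8), iconv can normalize the file back to UTF-8. A sketch with a fabricated sample standing in for the real export:

```shell
# Create a sample UTF-16 CSV (with BOM) standing in for the vsql export
printf 'id,name\n' | iconv -f UTF-8 -t UTF-16 > filename.csv
# Convert back to UTF-8; reading as UTF-16 honours the BOM to pick byte order
iconv -f UTF-16 -t UTF-8 filename.csv > filename-utf8.csv
cat filename-utf8.csv
```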
LEARN ABOUT DEBIAN
xml::um
UM(3pm) User Contributed Perl Documentation UM(3pm)
NAME
XML::UM - Convert UTF-8 strings to any encoding supported by XML::Encoding
SYNOPSIS
use XML::UM;
# Set directory with .xml files that come with the XML::Encoding distribution
# Always include the trailing slash!
$XML::UM::ENCDIR = '/home1/enno/perlModules/XML-Encoding-1.01/maps/';
# Create the encoding routine
my $encode = XML::UM::get_encode (
Encoding => 'ISO-8859-2',
EncodeUnmapped => \&XML::UM::encode_unmapped_dec);
# Convert a string from UTF-8 to the specified Encoding
my $encoded_str = $encode->($utf8_str);
# Remove circular references for garbage collection
XML::UM::dispose_encoding ('ISO-8859-2');
DESCRIPTION
This module provides methods to convert UTF-8 strings to any XML encoding that XML::Encoding supports. It creates mapping routines from the
.xml files that can be found in the maps/ directory in the XML::Encoding distribution. Note that the XML::Encoding distribution does
install the .enc files in your perl directory, but not the .xml files they were created from. That's why you have to specify $ENCDIR as in
the SYNOPSIS.
This implementation uses the XML::Encoding class to parse the .xml file and creates a hash that maps UTF-8 characters (each consisting of
up to 4 bytes) to their equivalent byte sequence in the specified encoding. Note that large mappings may consume a lot of memory!
Future implementations may parse the .enc files directly, or do the conversions entirely in XS (i.e. C code.)
get_encode (Encoding => STRING, EncodeUnmapped => SUB)
The central entry point to this module is the XML::UM::get_encode() method. It forwards the call to the global $XML::UM::FACTORY, which is
defined as an instance of XML::UM::SlowMapperFactory by default. Override this variable to plug in your own mapper factory.
The XML::UM::SlowMapperFactory creates an instance of XML::UM::SlowMapper (and caches it for subsequent use) that reads in the .xml
encoding file and creates a hash that maps UTF-8 characters to encoded characters.
Finally, the get_encode() method of XML::UM::SlowMapper is called, which generates an anonymous subroutine that uses the hash to convert
multi-character UTF-8 blocks to the proper encoding.
dispose_encoding ($encoding_name)
Call this to free the memory used by the SlowMapper for a specific encoding. Note that in order to free the big conversion hash, the user
should no longer have references to the subroutines generated by get_encode().
The parameters to the get_encode() method (defined as name/value pairs) are:
o Encoding
The name of the desired encoding, e.g. 'ISO-8859-2'
o EncodeUnmapped (Default: \&XML::UM::encode_unmapped_dec)
Defines how Unicode characters not found in the mapping file (of the specified encoding) are printed. By default, they are converted
to decimal entity references, like '&#123;'
Use \&XML::UM::encode_unmapped_hex for hexadecimal constants, like '&#xAB;'
CAVEATS
I'm not exactly sure about which Unicode characters in the range (0 .. 127) should be mapped to themselves. See comments in XML/UM.pm near
%DEFAULT_ASCII_MAPPINGS.
The encodings that expat supports by default are currently not supported, (e.g. UTF-16, ISO-8859-1), because there are no .enc files
available for these encodings. This module needs some more work. If you have the time, please help!
AUTHOR
Original Author is Enno Derksen.
Send bug reports, hints, tips, suggestions to T.J Mather at <tjmather@tjmather.com>.
perl v5.10.1 2010-01-03 UM(3pm)