Full Discussion: convert pdf's to word
Post 66557 by locustfurnace on Tuesday 15th of March 2005 02:12:35 PM, in Special Forums > Windows & DOS: Issues & Discussions
Able2Extract is a tool that converts data from PDF, HTML, and text source formats into formatted Excel spreadsheets, Word documents, HTML pages, and text files.
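For readers who prefer free command-line tools, here is a rough sketch of the text-extraction half of that job using pdftotext from poppler-utils (an assumption that it is installed; report.pdf is a placeholder name, and it recovers only the text, not Word-style formatting):

    # Pull the text out of a PDF, keeping the rough page layout,
    # then open the .txt in a word processor for further editing.
    pdftotext -layout report.pdf report.txt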
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

printing in postscript or convert from pdf?

Hi - go easy on me, I'm new to UNIX... I need to resample huge PDF files and make them smaller. Distiller won't resize existing PDF files. I thought I could print the PDF to a PostScript file and then resample that. Would that work? If so, how? Is this the best way forward or should... (1 Reply)
Discussion started by: jono2000
1 Reply
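A possible alternative to the print-to-PostScript detour is letting Ghostscript rewrite the PDF directly with downsampled images; a sketch, assuming gs is installed (the /ebook preset targets roughly 150 dpi, /screen is smaller still; file names are placeholders):

    # Rewrite big.pdf with images downsampled to ebook quality.
    gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
       -dNOPAUSE -dBATCH -dQUIET -sOutputFile=small.pdf big.pdf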

2. HP-UX

Convert PCL to PDF

We're looking for a program that can convert Text and PCL formats into a PDF. We're not looking to spend a fortune, so my search so far has come up empty-handed, as everyone wants an arm and a leg. I found txt2pdf, but that doesn't support PCL formats. Any help would be appreciated. TIA (2 Replies)
Discussion started by: Randle2I
2 Replies
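One no-cost route worth testing is GhostPCL (the pcl6 interpreter from the Ghostscript project), which shares Ghostscript's pdfwrite output device; a sketch, assuming pcl6 is on the PATH and job.pcl is a placeholder name:

    # Interpret a PCL print file and write it back out as PDF.
    pcl6 -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=job.pdf job.pcl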

3. UNIX and Linux Applications

looking for convert rtf to pdf utility

Hello all, I have a server that needs the ability to convert RTF files to PDF. What do you recommend I use (not OpenOffice, please)? It is supposed to process lots of files, so I need something I can wrap in code and use. (2 Replies)
Discussion started by: umen
2 Replies
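One office-suite-free pipeline that is easy to wrap in code is unrtf (RTF to HTML) chained into an HTML-to-PDF converter; a sketch, assuming unrtf and wkhtmltopdf are installed and report.rtf is a placeholder name:

    # RTF -> HTML -> PDF; loop over a directory for bulk runs.
    unrtf --html report.rtf > report.html
    wkhtmltopdf report.html report.pdf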

4. Shell Programming and Scripting

Printer filter - convert to ps then to pdf

Hi, I currently have a driver located in /etc/cups/interfaces that does the following:

    job="$1"
    user="$2"
    title="$3"
    numcopies="$4"
    options="$5"
    filename="$6"

    # Set up printer default modes
    echo -e "\033E\c"
    echo -e "\033)0B\c"

    cat "$filename"

So then I want to do something... (2 Replies)
Discussion started by: stuaz
2 Replies
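A minimal sketch of how such an interface script could produce a PDF instead of sending raw PCL escapes, assuming enscript and Ghostscript's ps2pdf are installed (the /var/spool/pdf output directory is an invented placeholder):

    #!/bin/sh
    # CUPS interface script arguments: job user title copies options filename
    job="$1"; user="$2"; title="$3"; numcopies="$4"; options="$5"; filename="$6"

    # Render the spooled text to PostScript, then convert that to PDF.
    enscript -B -p /tmp/"$job".ps "$filename"
    ps2pdf /tmp/"$job".ps /var/spool/pdf/"$job".pdf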

5. Shell Programming and Scripting

Perl - Convert html to pdf - PDF::FromHTML

Hi, I am trying to convert HTML to PDF using the Perl module PDF::FromHTML, and I am getting the error given below.

    not well-formed (invalid token) at line 2, column 17, byte 56 at C:/Perl/lib/XML/Parser.pm line 187 at C:/Perl/site/lib/PDF/FromHTML.pm line 140

The Perl code is as given... (2 Replies)
Discussion started by: DILEEP410
2 Replies
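The "not well-formed (invalid token)" message comes from XML::Parser, which PDF::FromHTML relies on, so the input has to be well-formed XHTML. A shell-level sketch of pre-cleaning the file before the Perl script runs, assuming HTML Tidy is installed (html2pdf.pl stands in for the poster's own script; file names are placeholders):

    # Turn loose HTML into well-formed XHTML, then feed the cleaned file to the converter.
    tidy -quiet -asxhtml -numeric -output clean.html input.html
    perl html2pdf.pl clean.html output.pdf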

6. Shell Programming and Scripting

convert solaris file to pdf

Hi Guys, How do I convert a UNIX text file into PDF? Please help! Thanks (2 Replies)
Discussion started by: Phuti
2 Replies
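A short sketch of the usual two-step route, assuming enscript and Ghostscript's ps2pdf are available on the box (a2ps works much the same way if enscript is missing; notes.txt is a placeholder name):

    # Text -> PostScript -> PDF.
    enscript -p notes.ps notes.txt
    ps2pdf notes.ps notes.pdf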

7. Shell Programming and Scripting

Using CSH and need to convert html to PDF

Hi, I am currently using the code below and it throws an error saying badly placed ()'s. I am not sure if this is the right way to convert HTML to PDF; please help. I am using a C-shell script:

    system ("../html2doc pdf 1000 portrait html$prnfile.html $prnfile.pdf ")
    print "Content-type:application/pdf... (7 Replies)
Discussion started by: lakers646
7 Replies
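The "badly placed ()'s" complaint is csh choking on the Perl-style system(...) and print syntax; in a C-shell script the converter is simply run as a command and the header written with echo. A minimal sketch under that assumption (html2doc and prnfile come from the post; everything else is a guess at the surrounding CGI script):

    #!/bin/csh
    # prnfile is assumed to be set earlier in the script, as in the original post.
    echo "Content-type: application/pdf"
    echo ""
    ../html2doc pdf 1000 portrait html$prnfile.html $prnfile.pdf
    cat $prnfile.pdf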

8. Red Hat

How to convert TXT to PDF in RHEL 6?

Hello friends, I need to convert ASCII text to PDF on RHEL 6, so I did the below and could generate a PDF, but it has a lot of junk/special characters.

    yum install enscript ghostscript
    enscript -p output.ps input.txt
    ps2pdf output.ps output.pdf

So I downloaded the latest source of Ghostscript... (4 Replies)
Discussion started by: magnus29
4 Replies
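The junk characters usually mean the input is UTF-8 while enscript only handles 8-bit encodings; two sketches to try, assuming iconv is present or, alternatively, the paps package, which renders UTF-8 natively (file names are placeholders):

    # Option 1: transliterate to Latin-1 before running enscript.
    iconv -f UTF-8 -t ISO-8859-1//TRANSLIT input.txt > input-lat1.txt
    enscript -p output.ps input-lat1.txt && ps2pdf output.ps output.pdf

    # Option 2: let paps render the UTF-8 text straight to PostScript.
    paps input.txt > output.ps && ps2pdf output.ps output.pdf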

9. Solaris

How to convert pdf file to txt?

Hello Unix gurus, I am learning Unix. I have lots of PDF data files. I need to convert them into txt files. Can you please guide me on how to do that? Thanks in advance. Rao (1 Reply)
Discussion started by: raopatwari
1 Reply
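A small batch-conversion sketch using pdftotext from the xpdf/poppler tools, assuming it is installed on the Solaris box (run it under ksh or another POSIX shell, since the ${f%.pdf} expansion is not plain Bourne shell):

    # Convert every PDF in the current directory to a matching .txt file.
    for f in *.pdf
    do
        pdftotext "$f" "${f%.pdf}.txt"
    done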

10. Linux

Need software details that can convert PDF to PS

We have been trying to convert PDF into PS for one of our projects. Our PDF files are a little big, and we use pdftops, which is freeware and takes a long time to convert them. Is there a licensed version of this kind of tool? Responses are really appreciated. (2 Replies)
Discussion started by: venureddy
2 Replies
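Before buying a commercial converter, it may be worth benchmarking Ghostscript's own PDF-to-PostScript path against pdftops; a sketch, assuming gs is installed (newer builds call the device ps2write, older releases pswrite; big.pdf is a placeholder name):

    # Rewrite a PDF as Level 2 PostScript with Ghostscript.
    gs -dNOPAUSE -dBATCH -dQUIET -sDEVICE=ps2write -sOutputFile=big.ps big.pdf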
HTML::LinkExtor(3) - User Contributed Perl Documentation

NAME
    HTML::LinkExtor - Extract links from an HTML document

SYNOPSIS
    require HTML::LinkExtor;
    $p = HTML::LinkExtor->new(\&cb, "http://www.perl.org/");
    sub cb {
        my($tag, %links) = @_;
        print "$tag @{[%links]}\n";
    }
    $p->parse_file("index.html");

DESCRIPTION
    HTML::LinkExtor is an HTML parser that extracts links from an HTML document. HTML::LinkExtor is a subclass of HTML::Parser. This means that the document should be given to the parser by calling the $p->parse() or $p->parse_file() methods.

    $p = HTML::LinkExtor->new
    $p = HTML::LinkExtor->new( $callback )
    $p = HTML::LinkExtor->new( $callback, $base )
        The constructor takes two optional arguments. The first is a reference to a callback routine. It will be called as links are found. If a callback is not provided, then links are just accumulated internally and can be retrieved by calling the $p->links() method.

        The $base argument is an optional base URL used to absolutize all URLs found. You need to have the URI module installed if you provide $base.

        The callback is called with the lowercase tag name as first argument, and then all link attributes as separate key/value pairs. All non-link attributes are removed.

    $p->links
        Returns a list of all links found in the document. The returned values will be anonymous arrays with the following elements:

            [$tag, $attr => $url1, $attr2 => $url2, ...]

        The $p->links method will also truncate the internal link list. This means that if the method is called twice without any parsing between them, the second call will return an empty list.

        Also note that $p->links will always be empty if a callback routine was provided when the HTML::LinkExtor was created.

EXAMPLE
    This is an example showing how you can extract links from a document received using LWP:

        use LWP::UserAgent;
        use HTML::LinkExtor;
        use URI::URL;

        $url = "http://www.perl.org/";  # for instance
        $ua = LWP::UserAgent->new;

        # Set up a callback that collects image links
        my @imgs = ();
        sub callback {
            my($tag, %attr) = @_;
            return if $tag ne 'img';  # we only look closer at <img ...>
            push(@imgs, values %attr);
        }

        # Make the parser.  Unfortunately, we don't know the base yet
        # (it might be different from $url)
        $p = HTML::LinkExtor->new(\&callback);

        # Request document and parse it as it arrives
        $res = $ua->request(HTTP::Request->new(GET => $url),
                            sub {$p->parse($_[0])});

        # Expand all image URLs to absolute ones
        my $base = $res->base;
        @imgs = map { $_ = url($_, $base)->abs; } @imgs;

        # Print them out
        print join("\n", @imgs), "\n";

SEE ALSO
    HTML::Parser, HTML::Tagset, LWP, URI::URL

COPYRIGHT
    Copyright 1996-2001 Gisle Aas.

    This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.