Shell Programming and Scripting: Perl script to copy contents of a web page
Post 302351259 by anand.linux1984, Tuesday 8th of September 2009, 12:20:52 AM
Quote:
Originally Posted by pludi
Question: why do you need the visual content of a page (with 50+ printed pages) in a word processor document? If it's for offline reading, why not convert it to PDF? If it's for archiving, why not save the HTML source?
Answer: You are right, Pludi. Redhat.com has some (70+) webpages which are purely documentation. I need these webpages.

So, my question is: I need these webpages as a Word document (or PDF is fine) for printing. Instead of printing them page by page, I need a script to copy these (70+) pages locally to my machine and print them.

This is the URL I am trying to do this with:

"http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Logical_Volume_Manager_Administration/index.html"

Thanks in advance.
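One way to handle this is a short Perl script that fetches the index page, follows the chapter links it contains, and saves each page locally; the saved HTML can then be printed or converted in one go instead of page by page. This is only a minimal sketch, assuming the LWP::Simple, HTML::LinkExtor, and URI modules are installed; the lvm_docs output directory is just an illustrative name.

#!/usr/bin/perl
# Minimal sketch: fetch the index page, collect links to .html pages,
# and save each one locally so the whole guide can be printed offline.
use strict;
use warnings;
use LWP::Simple qw(get getstore);
use HTML::LinkExtor;
use URI;

my $index  = 'http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Logical_Volume_Manager_Administration/index.html';
my $outdir = 'lvm_docs';                      # illustrative local directory
mkdir $outdir unless -d $outdir;

my $html = get($index) or die "Could not fetch $index\n";

# Collect href targets from every <a> tag on the index page.
my @links;
my $parser = HTML::LinkExtor->new(sub {
    my ($tag, %attr) = @_;
    push @links, $attr{href} if $tag eq 'a' && defined $attr{href};
});
$parser->parse($html);
$parser->eof;

my %seen;
for my $href (@links) {
    my $url = URI->new_abs($href, $index);    # resolve relative links
    next unless $url =~ /\.html$/;            # keep only the HTML chapters
    next if $seen{$url}++;                    # skip duplicates
    my ($file) = $url =~ m{([^/]+)$};         # basename for the local copy
    print "Saving $url -> $outdir/$file\n";
    getstore($url, "$outdir/$file");
}

Once the pages are on disk they can be printed together, or fed to an HTML-to-PDF or HTML-to-Word converter of your choice.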
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

running shell script thru WEB page ....

....passing variable via list... here's the HTML code extract: **************** <form method=post action=http://servername/cgi-bin/cgi-comptage_diff.ksh> <table border...........> .............. </table> <table bgcolor=#FFFFFF width="980"> ... (6 Replies)
Discussion started by: Nicol
6 Replies

2. Shell Programming and Scripting

How to redirect to a new web page after a script has completed?

Hi, I have CGI code with a shell script in it. I am submitting a form in first.cgi. These values are posted to second.cgi. Second.cgi does some processing with these values. After that I want to redirect my page from second.cgi back to first.cgi. What is the command I can use from the CGI (shell) script? ... (2 Replies)
Discussion started by: rajbal
2 Replies
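For what it's worth, the usual CGI answer to this redirect question (sketched in Perl here, although the original scripts are ksh) is to emit only a Location header once processing is done; the script paths below are placeholders.

#!/usr/bin/perl
# second.cgi (sketch): after processing the posted values, send the browser
# back to first.cgi by emitting a redirect header instead of an HTML body.
use strict;
use warnings;

my $ok = 1;    # outcome of whatever processing second.cgi does (placeholder)

if ($ok) {
    # A response whose only header is Location: makes the web server issue
    # an HTTP redirect; the header block must end with a blank line.
    print "Location: /cgi-bin/first.cgi\r\n\r\n";
} else {
    print "Content-type: text/html\r\n\r\n";
    print "<html><body>Processing failed.</body></html>\n";
}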

3. Shell Programming and Scripting

PERL - copy file contents to array then compare against other file

Basically, to illustrate: I want to take a file with multiple lines, C:\searching4theseletters.txt a b c Read this into an array @ARRAY and then use this to compare against another file C:\inputletters.txt b o a c n a (9 Replies)
Discussion started by: bradleykins
9 Replies
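A minimal Perl sketch of that file-to-array comparison, assuming the two filenames from the post and that each file holds one letter per line:

#!/usr/bin/perl
# Sketch: load the letters from one file into a hash, then report which
# lines of the second file match any of them.
use strict;
use warnings;

my ($search_file, $input_file) = ('searching4theseletters.txt', 'inputletters.txt');

open my $sf, '<', $search_file or die "Cannot open $search_file: $!";
chomp(my @letters = <$sf>);               # e.g. ('a', 'b', 'c')
close $sf;
my %wanted = map { $_ => 1 } @letters;    # hash lookup is simpler than nested loops

open my $in, '<', $input_file or die "Cannot open $input_file: $!";
while (my $line = <$in>) {
    chomp $line;
    print "$line matches\n" if $wanted{$line};
}
close $in;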

4. Shell Programming and Scripting

how to redirect to a web-page by shell script

Dear all, I am calling a Korn shell script (CGI script) from a web page. This shell script does some checking in a Unix file and returns true or false. Now, within the same script, if it returns true then I want to redirect to another web page stored in the htdocs directory. Example: Login page sends a... (3 Replies)
Discussion started by: ravi18s
3 Replies

5. Shell Programming and Scripting

Perl Script to find & copy words from Web.

I need to write a perl script to search for a specific set of numbers that occur after a series of words but before another. Specifically, I need to locate the phrase today at the summit, then immediately prior to the words tonnes/day copy the number that will be between 100 and 9,999, for example,... (1 Reply)
Discussion started by: libertyforall
1 Reply
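A hedged sketch of the pattern that thread describes; the sample sentence is invented, and in practice the page text would first have to be fetched (for example with LWP::Simple, as above).

#!/usr/bin/perl
# Sketch: find the phrase "today at the summit", then capture the number
# (100-9,999, possibly with a thousands comma) just before "tonnes/day".
use strict;
use warnings;

my $text = 'Production today at the summit reached 1,250 tonnes/day of ore.';

if ($text =~ /today at the summit.*?([1-9],?\d{2,3})\s*tonnes\/day/s) {
    my $number = $1;
    $number =~ tr/,//d;                   # drop the thousands separator
    print "Found: $number\n";
}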

6. Shell Programming and Scripting

How to input a number in a web page and pass to a script?

I am working on an embedded Linux router and trying to make a webpage where the user can input a desired number of CPE and have a script update that number on the router. I have a CLI where I can log in and type the following to change that number: echo "20">/proc/net/dbrctl/maxcpe, which then... (7 Replies)
Discussion started by: BobTheBulldog
7 Replies
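A rough CGI sketch for that question, written in Perl for consistency with this thread; the maxcpe form field name is an assumption, and in practice the web server user usually needs a sudo rule or a small helper before it is allowed to write under /proc.

#!/usr/bin/perl
# Sketch: take a number posted from the web form and write it to the
# router's proc entry, with a simple digits-only sanity check.
use strict;
use warnings;
use CGI;

my $q   = CGI->new;
my $cpe = $q->param('maxcpe');            # form field name is an assumption

print $q->header('text/html');

if (defined $cpe && $cpe =~ /^\d+$/) {
    if (open my $fh, '>', '/proc/net/dbrctl/maxcpe') {
        print $fh "$cpe\n";
        close $fh;
        print "<html><body>maxcpe set to $cpe</body></html>\n";
    } else {
        print "<html><body>Could not write maxcpe: $!</body></html>\n";
    }
} else {
    print "<html><body>Invalid value.</body></html>\n";
}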

7. HP-UX

Help running a unix script from a web page

First, let me state that I am completely out of my realm with this. I have a server running HPUX. I'm not even sure if this can be considered a UNIX question and for that let me apologize in advance. I need to create a web page where a client can input 2 variables (i.e. date and phone number).... (0 Replies)
Discussion started by: grinds
0 Replies

8. Shell Programming and Scripting

Random web page download wget script

Hi, I've been attempting to create a script that downloads web pages at random intervals to mimic typical user usage. However I'm struggling to link $url to the URL list and thus wget complains of a missing URL. Any ideas? Thanks #!/bin/sh #URL List url1="http://www.bbc.co.uk"... (14 Replies)
Discussion started by: shadyuk
14 Replies
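The shell version usually trips over building $url1, $url2, ... by indirection; a Perl sketch sidesteps that by keeping the URLs in a real array. The URL list, fetch count, and sleep range below are only illustrative.

#!/usr/bin/perl
# Sketch: download a page chosen at random from a list, wait a random
# number of seconds, and repeat - roughly mimicking casual browsing.
use strict;
use warnings;
use LWP::Simple qw(getstore);

my @urls = (
    'http://www.bbc.co.uk',
    'http://www.example.com',             # placeholder for the rest of the list
);

for my $i (1 .. 10) {                     # number of fetches is arbitrary
    my $url  = $urls[ int rand @urls ];   # pick a random entry
    my $file = "page_$i.html";
    print "Fetching $url -> $file\n";
    getstore($url, $file);
    sleep(30 + int rand 300);             # wait 30-329 seconds between requests
}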

9. Shell Programming and Scripting

Copy text from web page and add to file

I need help making a script on Ubuntu for OSCam that copies the text from this website, which contains only "C: ip port randomUSERNAME password"; I want to drop the "C:" part and use the rest to replace the old values in my test.server file. (line 22) device = ip,port (line 23) user =... (6 Replies)
Discussion started by: baxarn
6 Replies
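One possible Perl sketch for that config update; the source URL is a placeholder, and exactly which fields end up on the device and user lines is an assumption based on the post.

#!/usr/bin/perl
# Sketch: fetch a one-line "C: host port username password" entry and
# splice it into the OSCam config by rewriting the device and user lines.
use strict;
use warnings;
use LWP::Simple qw(get);

my $page = get('http://example.com/cline.txt')    # placeholder URL
    or die "Could not fetch the server line\n";

# Expect something like: C: some.host.com 12000 someuser somepass
my ($host, $port, $user, $pass) =
    $page =~ /^C:\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)/m
    or die "No C: line found\n";

open my $in, '<', 'test.server' or die "Cannot read test.server: $!";
my @lines = <$in>;
close $in;

for (@lines) {
    s/^\s*device\s*=.*/device = $host,$port/;
    s/^\s*user\s*=.*/user = $user/;               # which fields go here is assumed
}

open my $out, '>', 'test.server' or die "Cannot write test.server: $!";
print $out @lines;
close $out;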

10. Shell Programming and Scripting

Refresh web page in bash script

Hello, I am trying to refresh my web page, which is created by a bash script. I have an HTML page which, when a button is pressed, calls a bash script. This bash script recreates the same page with dynamic data. When pressing the button I am calling a function that sets a timeout of 7 seconds and after... (1 Reply)
Discussion started by: SH78
1 Reply
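The usual fix for that refresh question is to have the generated page reload itself with a meta refresh tag; here is a minimal Perl CGI sketch (the original script is bash, but the header and tag are the same either way).

#!/usr/bin/perl
# Sketch: emit a page that reloads itself every 7 seconds, so freshly
# generated data is picked up without any user action.
use strict;
use warnings;

print "Content-type: text/html\r\n\r\n";
print <<'HTML';
<html>
  <head>
    <meta http-equiv="refresh" content="7">
  </head>
  <body>
    <!-- the dynamic data would be generated here by the script -->
    Page generated; it will refresh in 7 seconds.
  </body>
</html>
HTML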
PDF2TXT(1)							  PDFMiner Manual							PDF2TXT(1)

NAME
       pdf2txt - extracts text contents of PDF files

SYNOPSIS
       pdf2txt [option...] file...

DESCRIPTION
       pdf2txt extracts text contents from a PDF file. It extracts all the text that is to be rendered programmatically, i.e. text represented as ASCII or Unicode strings. It cannot recognize text drawn as images, which would require optical character recognition. It also extracts the corresponding locations, font names, font sizes, and writing direction (horizontal or vertical) for each text portion. You need to provide a password for a protected PDF document when its access is restricted. You cannot extract any text from a PDF document which does not have extraction permission.

OPTIONS
       -o file
              Specifies the output file name. The default is to print the extracted contents to standard output in text format.

       -p pageno[,pageno,...]
              Specifies the comma-separated list of the page numbers to be extracted. Page numbers start at one. By default, it extracts text from all the pages.

       -c codec
              Specifies the output codec.

       -t type
              Specifies the output format. The following formats are currently supported:

              text   Text format. This is the default.
              html   HTML format. It is not recommended.
              xml    XML format. It provides the most information.
              tag    "Tagged PDF" format. A tagged PDF has its own contents annotated with HTML-like tags. pdf2txt tries to extract its content streams rather than inferring its text locations. Tags used here are defined in the PDF Reference, Sixth Edition[1] (S10.7 "Tagged PDF").

       -D writing-mode
              Specifies the writing mode of text outputs:

              lr-tb  Left-to-right, top-to-bottom.
              tb-rl  Top-to-bottom, right-to-left.
              auto   Determine writing mode automatically.

       -M char-margin, -L line-margin, -W word-margin
              These are the parameters used for layout analysis. In an actual PDF file, text portions might be split into several chunks in the middle of a run, depending on the authoring software, so text extraction needs to splice text chunks back together. Two text chunks whose distance is closer than the char-margin are considered continuous and are grouped into one. Likewise, two lines whose distance is closer than the line-margin are grouped into a text box, which is a rectangular area that contains a "cluster" of text portions. Furthermore, blank characters (spaces) may need to be inserted when the distance between two words is greater than the word-margin, because a blank between words might not be represented as a space but only indicated by the positioning of each word. Each value is specified not as an actual length, but as a proportion of the size of the characters in question. The default values are char-margin = 1.0, line-margin = 0.3, and word-margin = 0.2.

       -n     Suppress layout analysis.

       -A     Force layout analysis for all the text strings, including text contained in figures.

       -V     Enable detection of vertical writing.

       -s scale
              Specifies the output scale. This option can be used in HTML format only.

       -m n   Specifies the maximum number of pages to extract. By default, all the pages in a document are extracted.

       -P password
              Provides the user password to access PDF contents.

       -d     Increase the debug level.

EXAMPLES
       Extract text as an HTML file whose filename is output.html:

              $ pdf2txt -o output.html samples/naacl06-shinyama.pdf

       Extract a Japanese HTML file in vertical writing:

              $ pdf2txt -c euc-jp -D tb-rl -o output.html samples/jo.pdf

       Extract text from an encrypted PDF file:

              $ pdf2txt -P mypassword -o output.txt secret.pdf

SEE ALSO
       dumppdf(1)

AUTHORS
       Jakub Wilk <jwilk@debian.org>
              Wrote this manual page for the Debian system.

       Yusuke Shinyama <yusuke@cs.nyu.edu>
              Author of PDFMiner and its original HTML documentation.

NOTES
        1. PDF Reference, Sixth Edition
           http://www.adobe.com/devnet/acrobat/pdfs/pdf_reference_1-7.pdf

pdf2txt                                                          08/24/2011                                                          PDF2TXT(1)