Shell Programming and Scripting: Performing extractions on web source page
Reply by agama, 08-28-2011
It would be helpful to see the original line that you extracted with grep, as well as the original sed command.

I can only guess, but it sounds like there might be two instances of one of the words. Consider this line of text:

Code:
This text is a sample for the use of pattern matching in text

If the two words you want to capture text between are text and pattern, and you use the pair of sed substitutions shown below, you'll get nothing on the output:

Code:
s/.*text//; s/pattern.*//

This produces nothing because the .* in .*text is greedy: it matches from the start of the line through the last occurrence of text, which here is the entire line, so the first substitution deletes everything and leaves the second one nothing to work with.

If you are certain that the second word appears only once on the line, then reversing the order of the sed substitutions will help, but it's not foolproof.
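To illustrate, here is a quick sketch of both orderings on the sample line (paste it into a shell to try it):

Code:
# "text" appears twice on this line; "pattern" appears once
line='This text is a sample for the use of pattern matching in text'

# original order: the greedy .*text eats through the last "text", i.e. the whole line
echo "$line" | sed 's/.*text//; s/pattern.*//'      # prints an empty line

# reversed order: trim the tail at "pattern" first, then the head up to "text"
echo "$line" | sed 's/pattern.*//; s/.*text//'      # prints " is a sample for the use of "

Another option, assuming both words always appear on the line in that order, is to capture the middle with a back-reference and print only lines that match; note it still extends to the last "pattern" if that word repeats:

Code:
echo "$line" | sed -n 's/.*text\(.*\)pattern.*/\1/p'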

And you don't need to cat a file into sed; sed can read the file itself, so you avoid the overhead of an extra process:

Code:
sed 's/foo/bar/' input-file >new-file
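If your sed supports it (GNU sed and BSD sed both do, though it is not required by POSIX), you can also edit the file in place and keep a backup:

Code:
sed -i.bak 's/foo/bar/' input-file    # original saved as input-file.bak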

 

9 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Which command to show the source code on a web page?

Hi folks! I am using Mac OS X, which runs FreeBSD. Could you tell me what command to type on the Unix Terminal to display on the terminal the source code of a certain web page? I think something like #<command> http://www.apple.com will display on the terminal's window the html source code... (11 Replies)
Discussion started by: fundidor

2. UNIX for Dummies Questions & Answers

reading web page source in unix

Is there a command that allows you to take a URL and grab the source code from the page and output it to stdout? I want to know because I want to grab a page and pass it through another program to analyze the page. Any help would be appreciated. Thanks (3 Replies)
Discussion started by: jaymzlee
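For reference, either of the common download tools can dump a page's source to stdout (assuming curl or wget is installed):

Code:
curl -s http://www.example.com        # -s silences the progress meter
wget -qO- http://www.example.com      # -q quiet, -O- write to stdout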

3. Shell Programming and Scripting

write page source to standard output

I'm new to Perl, but I want to take the page source and write it to a file or standard output. I used perl.org as a test website. Here is the script: use strict; use warnings; use LWP::Simple; getprint('http://www.perl.org') or die 'Unable to get page'; exit 0; ... (1 Reply)
Discussion started by: wxornot

4. UNIX for Dummies Questions & Answers

Looking for a web page that won't let me in

Hi, I have a project for school using wget and egrep to locate pattern locations on a web page. One of the things we have to do is handle an "access denied" exception. Here is the problem: I cannot think of or find any web pages that give me an access denied error to play with. Can anyone suggest... (1 Reply)
Discussion started by: njmiano

5. Shell Programming and Scripting

Getting source code of a page

I want to download a particular page from the internet and get the source code of the page in html format. I want to parse the source code to find specific parameters using the grep command. Could someone tell me the Linux command to download a specific page and parse its source code. ... (1 Reply)
Discussion started by: ahamed

6. Shell Programming and Scripting

web page source cleanup

Is it possible to process web pages to remove all tag style information but leave the tag? Say I have <h1 style='font-size: xxx; color: xxxxxx'>headline 1</h1> and I want to get <h1>headline 1</h1>. BTW, I got a one-liner here to remove all tags: sed -n '/^$/!{s/<*>//g;p; Thanks a... (4 Replies)
Discussion started by: dtdt

7. Shell Programming and Scripting

Save page source, including javascript

I need to get the source code of a webpage. I have tried to use wget and curl, but they don't show the necessary javascript part of the source. I don't have to execute it, only to view the source. How do I do that? (1 Reply)
Discussion started by: locoroco

8. Shell Programming and Scripting

Parse Page Source and Extract Links

Hi Friends, I have a bunch of URLs. Each URL will open up an abstract page. But the source contains a link to the main PDF article. I am looking for a script to do the following task: 1. Read an input file with URLs. 2. Parse the source and grab all the lines that have the word 'PDF'.... (1 Reply)
Discussion started by: jacobs.smith

9. Shell Programming and Scripting

Dump web page source as rendered by browser

Hi guys! I need to retrieve a specific .m3u8 link from a web page, which makes use of iframes and JavaScript. I tried to get the full source with "wget", "lynx", "w3m" and "phantomjs", but they can't dump all the source, with the part containing the link that I need, which seems to be inside... (0 Replies)
Discussion started by: Marmz